Assignment 4 - Neural Networks¶
Jiechen Li¶
Netid: jl1254
Names of students you worked with on this assignment: Lin Hui, Yabei Zeng, ChatGPT for concepts, machine learning python code, formula clarification and grammar check.
Note: this assignment falls under collaboration Mode 2: Individual Assignment – Collaboration Permitted. Please refer to the syllabus for additional information.
Instructions for all assignments can be found here.
Total points in the assignment add up to 90; an additional 10 points are allocated to presentation quality.
Learning objectives¶
Through completing this assignment you will be able to...
- Identify key hyperparameters in neural networks and how they can impact model training and fit
- Build, tune the parameters of, and apply feed-forward neural networks to data
- Implement and explain each and every part of a standard fully-connected neural network and its operation including feed-forward propagation, backpropagation, and gradient descent.
- Apply a standard neural network implementation and search the hyperparameter space to select optimized values.
- Develop a detailed understanding of the math and practical implementation considerations of neural networks, one of the most widely used machine learning tools, so that it can be leveraged for learning about other neural networks of different model architectures.
1¶
[60 points] Exploring and optimizing neural network hyperparameters¶
Neural networks have become ubiquitous in the machine learning community, demonstrating exceptional performance over a wide range of supervised learning tasks. The benefits of these techniques come at the price of increased computational complexity and model designs with larger numbers of hyperparameters that need to be set correctly to make these techniques work. Poor hyperparameter choices in neural networks commonly result in significant decreases in generalization performance. The goal of this exercise is to better understand some of the key hyperparameters you will encounter in practice so that you can be better prepared to tune your model for a given application. Through this exercise, you will explore two common approaches to hyperparameter tuning: a manual approach where we greedily select the best individual hyperparameter (often people will pick potentially sensible options, try them, and hope they work), and a random search of the hyperparameter space, which has been shown to be an efficient way to achieve good hyperparameter values.
To explore this, we'll be using the example data created below throughout this exercise and the various training, validation, test splits. We will select each set of hyperparameters for our greedy/manual approach and the random search using a training/validation split, then retrain on the combined training and validation data before finally evaluating our generalization performance for both our final models on the test data.
# Optional for clear plotting on Macs
%config InlineBackend.figure_format='retina'
# Some of the network training leads to warnings. When we know and are OK with
# what's causing the warning and simply don't want to see it, we can use the
# following code. Run this block
# to disable warnings
import sys
import os
import warnings
if not sys.warnoptions:
warnings.simplefilter("ignore")
os.environ["PYTHONWARNINGS"] = "ignore"
import numpy as np
from sklearn.model_selection import PredefinedSplit
# -----------------------------------------------------------------------------
# Create the data
# -----------------------------------------------------------------------------
# Data generation function to create a checkerboard-patterned dataset
def make_data_normal_checkerboard(n, noise=0):
n_samples = int(n / 4)
shift = 0.5
c1a = np.random.randn(n_samples, 2) * noise + [-shift, shift]
c1b = np.random.randn(n_samples, 2) * noise + [shift, -shift]
c0a = np.random.randn(n_samples, 2) * noise + [shift, shift]
c0b = np.random.randn(n_samples, 2) * noise + [-shift, -shift]
X = np.concatenate((c1a, c1b, c0a, c0b), axis=0)
y = np.concatenate((np.ones(2 * n_samples), np.zeros(2 * n_samples)))
# Set a cutoff to the data and fill in with random uniform data:
cutoff = 1.25
indices_to_replace = np.abs(X) > cutoff
for index, value in enumerate(indices_to_replace.ravel()):
if value:
X.flat[index] = np.random.rand() * 2.5 - 1.25
return (X, y)
# Training datasets
np.random.seed(42)
noise = 0.45
X_train, y_train = make_data_normal_checkerboard(500, noise=noise)
# Validation and test data
X_val, y_val = make_data_normal_checkerboard(500, noise=noise)
X_test, y_test = make_data_normal_checkerboard(500, noise=noise)
# For RandomizedSearchCV, we will need to combine training and validation sets then
# specify which portion is training and which is validation
# Also, for the final performance evaluation, train on all of the training AND validation data
X_train_plus_val = np.concatenate((X_train, X_val), axis=0)
y_train_plus_val = np.concatenate((y_train, y_val), axis=0)
# Create a predefined train/test split for RandomSearchCV (to be used later)
validation_fold = np.concatenate((-1 * np.ones(len(y_train)), np.zeros(len(y_val))))
train_val_split = PredefinedSplit(validation_fold)
To help get you started we should always begin by visualizing our training data, here's some code that does that:
import matplotlib.pyplot as plt
# Code to plot the sample data
def plot_data(ax, X, y, title, limits):
# Select the colors to use in the plots
color0 = "#121619" # Dark grey
color1 = "#00B050" # Green
color_boundary = "#858585"
# Separate samples by class
samples0 = X[y == 0]
samples1 = X[y == 1]
ax.plot(
samples0[:, 0],
samples0[:, 1],
marker="o",
markersize=5,
linestyle="None",
color=color0,
markeredgecolor="w",
markeredgewidth=0.5,
label="Class 0",
)
ax.plot(
samples1[:, 0],
samples1[:, 1],
marker="o",
markersize=5,
linestyle="None",
color=color1,
markeredgecolor="w",
markeredgewidth=0.5,
label="Class 1",
)
ax.set_title(title)
ax.set_xlabel("$x_1$")
ax.set_ylabel("$x_2$")
ax.legend(loc="upper left")
ax.axis(limits)
ax.set_aspect("equal")
fig, ax = plt.subplots(constrained_layout=True, figsize=(5, 5))
limits = [-1.25, 1.25, -1.25, 1.25]
plot_data(ax, X_train, y_train, "Training Data", limits)
The hyperparameters we want to explore control the architecture of our model and how our model is fit to our data. These hyperparameters include the (a) learning rate, (b) batch size, and (c) regularization coefficient, as well as the (d) model architecture hyperparameters (the number of layers and the number of nodes per layer). We'll explore each of these and determine an optimized configuration of the network for this problem through this exercise. For all of the settings we explore, we'll assume the following default hyperparameters for the model (we'll use scikit-learn's MLPClassifier as our neural network model):
- learning_rate_init = 0.03
- hidden_layer_sizes = (30,30) (two hidden layers, each with 30 nodes)
- alpha = 0 (regularization penalty)
- solver = 'sgd' (stochastic gradient descent optimizer)
- tol = 1e-5 (this sets the convergence tolerance)
- early_stopping = False (this prevents early stopping)
- activation = 'relu' (rectified linear unit)
- n_iter_no_change = 1000 (this prevents early stopping)
- batch_size = 50 (size of the minibatch for stochastic gradient descent)
- max_iter = 500 (maximum number of epochs, which is how many times each data point will be used, not the number of gradient steps)
This default setting is our initial guess of what good values may be. Notice there are many model hyperparameters in this list: any of these could potentially be options to search over. We constrain the search to those hyperparameters that are known to have a significant impact on model performance.
(a) Visualize the impact of different hyperparameter choices on classifier decision boundaries. Visualize the impact of different hyperparameter settings. Starting with the default settings above make the following changes (only change one hyperparameter at a time). For each hyperparameter value, plot the decision boundary on the training data (you will need to train the model once for each parameter value):
- Vary the architecture (hidden_layer_sizes) by changing the number of nodes per layer while keeping the number of layers constant at 2: (2,2), (5,5), (30,30). Here (X,X) means a 2-layer network with X nodes in each layer.
- Vary the learning rate: 0.0001, 0.01, 1
- Vary the regularization: 0, 1, 10
- Vary the batch size: 5, 50, 500
This should produce 12 plots, altogether. For easier comparison, please plot nodes & layers combinations, learning rates, regularization strengths, and batch sizes in four separate rows (with three columns each representing a different value for each of those hyperparameters).
As you're exploring these settings, visit this website, the Neural Network Playground, which will give you the chance to interactively explore the impact of each of these parameters on a similar dataset to the one we use in this exercise. The tool also allows you to adjust the learning rate, batch size, regularization coefficient, and the architecture and to see the resulting decision boundary and learning curves. You can also visualize the model's hidden node output and its weights, and it allows you to add in transformed features as well. Experiment by adding or removing hidden layers and neurons per layer and vary the hyperparameters.
(a)¶
# First, train the MLPClassifier with the given hyperparameters
from sklearn.neural_network import MLPClassifier
from mlxtend.plotting import plot_decision_regions
def train_and_plot_decision_boundary(
X, y, hidden_layer_sizes, learning_rate_init, alpha, batch_size
):
"""Initialize the MLPClassifier with the given hyperparameters"""
clf = MLPClassifier(
hidden_layer_sizes=hidden_layer_sizes,
learning_rate_init=learning_rate_init,
alpha=alpha,
solver="sgd",
tol=1e-5,
early_stopping=False,
activation="relu",
n_iter_no_change=1000,
batch_size=batch_size,
max_iter=500,
random_state=42,
)
# train the MLPClassifier and return the fit model
clf.fit(X, y)
return clf
# 1. Vary the architecture (`hidden_layer_sizes`) by changing the number of nodes per layer
# while keeping the number of layers constant at 2: (2,2), (5,5), (30,30). Here (X,X)
# means a 2-layer network with X nodes in each layer.
from matplotlib.colors import ListedColormap
from sklearn.neural_network import MLPClassifier
# Define the colormap for the two classes
color0 = "#121619" # Dark grey
color1 = "#00B050" # Green
color_map = ListedColormap([color0, color1])
# Function to plot decision boundary
def plot_decision_boundary(clf, X, y, ax, title):
# Set min and max values and give it some padding
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.01), np.arange(y_min, y_max, 0.01))
# Predict the function value for the whole grid
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
# Plot the contour and training examples
ax.contourf(xx, yy, Z, alpha=0.4, cmap=color_map)
scatter = ax.scatter(X[:, 0], X[:, 1], c=y, s=20, edgecolor="k", cmap=color_map)
# Create a legend for the scatter plot
legend1 = ax.legend(*scatter.legend_elements(), loc="upper left", title="Classes")
ax.add_artist(legend1)
ax.set_title(title)
# Function to create a grid of subplots for different hyperparameter settings
def train_and_plot_grid(X_train, y_train):
# Define hyperparameter settings
hidden_layer_sizes_options = [(2, 2), (5, 5), (30, 30)]
learning_rate_init_options = [0.0001, 0.01, 1]
alpha_options = [0, 1, 10]
batch_size_options = [5, 50, 500]
# Set up the figure
fig, axes = plt.subplots(4, 3, figsize=(15, 20))
axes = axes.ravel() # Flatten the axes array
# Default hyperparameters (matching the assignment's stated defaults)
default_params = {
"hidden_layer_sizes": (30, 30),
"learning_rate_init": 0.03,
"alpha": 0,
"solver": "sgd",
"tol": 1e-5,
"early_stopping": False,
"activation": "relu",
"n_iter_no_change": 1000,
"batch_size": 50,
"max_iter": 500,
"random_state": 42,
}
# Generate plots for each hyperparameter variation
for i, size in enumerate(hidden_layer_sizes_options):
params = default_params.copy()
params["hidden_layer_sizes"] = size
clf = MLPClassifier(**params).fit(X_train, y_train)
plot_decision_boundary(clf, X_train, y_train, axes[i], f"Hidden Layers: {size}")
for i, rate in enumerate(learning_rate_init_options):
params = default_params.copy()
params["learning_rate_init"] = rate
clf = MLPClassifier(**params).fit(X_train, y_train)
plot_decision_boundary(
clf, X_train, y_train, axes[i + 3], f"Learning Rate: {rate}"
)
for i, alpha in enumerate(alpha_options):
params = default_params.copy()
params["alpha"] = alpha
clf = MLPClassifier(**params).fit(X_train, y_train)
plot_decision_boundary(
clf, X_train, y_train, axes[i + 6], f"Regularization (alpha): {alpha}"
)
for i, batch in enumerate(batch_size_options):
params = default_params.copy()
params["batch_size"] = batch
clf = MLPClassifier(**params).fit(X_train, y_train)
plot_decision_boundary(
clf, X_train, y_train, axes[i + 9], f"Batch Size: {batch}"
)
plt.tight_layout()
plt.show()
# Generate and plot the data
np.random.seed(42)
noise = 0.45
X_train, y_train = make_data_normal_checkerboard(500, noise=noise)
train_and_plot_grid(X_train, y_train)
(b) Manual (greedy) hyperparameter tuning I: manually optimize hyperparameters that govern the learning process, one hyperparameter at a time. Now with some insight into which settings may work better than others, let's more fully explore the performance of these different settings in the context of our validation dataset through a manual optimization process. Holding all else constant (with the default settings mentioned above), vary each of the following parameters as specified below. Train your algorithm on the training data, and evaluate the performance of your trained algorithm on the validation dataset. Here, overall accuracy is a reasonable performance metric since the classes are balanced and we don't weight one type of error as more important than the other; therefore, use the score method of the MLPClassifier for this. Create plots of accuracy vs each parameter you vary (this will result in three plots).
- Vary learning rate logarithmically from $10^{-5}$ to $10^{0}$ with 20 steps
- Vary the regularization parameter logarithmically from $10^{-8}$ to $10^2$ with 20 steps
- Vary the batch size over the following values: $[1,3,5,10,20,50,100,250,500]$
For each of these cases:
- Based on the results, report your optimal choices for each of these hyperparameters and why you selected them.
- Since neural networks can be sensitive to initialization values, you may notice these plots may be a bit noisy. Consider this when selecting the optimal values of the hyperparameters. If the noise seems significant, run the fit-and-score procedure multiple times (without fixing a random seed) and report the average; rerunning the algorithm changes the initialization and therefore the output (assuming you do not set a random seed). A minimal sketch of this averaging appears after this list.
- Use the chosen hyperparameter values as the new default settings for sections (c) and (d).
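As referenced above, here is a minimal sketch of the averaging procedure (the helper name mean_val_accuracy and its n_runs parameter are illustrative assumptions, not part of the assignment):

import numpy as np
from sklearn.neural_network import MLPClassifier

def mean_val_accuracy(params, X_train, y_train, X_val, y_val, n_runs=5):
    # Average validation accuracy over several random initializations.
    # params is a dict of MLPClassifier keyword arguments; leaving
    # random_state unset means each run starts from a different initialization.
    scores = []
    for _ in range(n_runs):
        clf = MLPClassifier(**params)
        clf.fit(X_train, y_train)
        scores.append(clf.score(X_val, y_val))
    return float(np.mean(scores))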
(b)¶
# Function to evaluate MLPClassifier over a range of hyperparameters
def evaluate_hyperparameter(
hyperparam_name, hyperparam_values, X_train, y_train, X_val, y_val
):
accuracies = []
for value in hyperparam_values:
params = {
"hidden_layer_sizes": (30, 30),
"learning_rate_init": 0.03,
"alpha": 0,
"solver": "sgd",
"tol": 1e-5,
"early_stopping": False,
"activation": "relu",
"n_iter_no_change": 1000,
"batch_size": 50,
"max_iter": 500,
"random_state": 42,
}
params[hyperparam_name] = value
clf = MLPClassifier(**params)
clf.fit(X_train, y_train)
score = clf.score(X_val, y_val)
accuracies.append(score)
return accuracies
# 1. Vary learning rate
learning_rates = np.logspace(-5, 0, 20)
accuracies_lr = evaluate_hyperparameter(
"learning_rate_init", learning_rates, X_train, y_train, X_val, y_val
)
# 2. Vary regularization parameter
regularization_params = np.logspace(-8, 2, 20)
accuracies_reg = evaluate_hyperparameter(
"alpha", regularization_params, X_train, y_train, X_val, y_val
)
# 3. Vary batch size
batch_sizes = [1, 3, 5, 10, 20, 50, 100, 250, 500]
accuracies_batch = evaluate_hyperparameter(
"batch_size", batch_sizes, X_train, y_train, X_val, y_val
)
# Plotting accuracies
fig, axes = plt.subplots(3, 1, figsize=(8, 15))
axes[0].plot(learning_rates, accuracies_lr, marker="o")
axes[0].set_xscale("log")
axes[0].set_title("Accuracy vs Learning Rate")
axes[0].set_xlabel("Learning Rate")
axes[0].set_ylabel("Accuracy")
axes[1].plot(regularization_params, accuracies_reg, marker="o")
axes[1].set_xscale("log")
axes[1].set_title("Accuracy vs Regularization Parameter")
axes[1].set_xlabel("Regularization Parameter (alpha)")
axes[1].set_ylabel("Accuracy")
axes[2].plot(batch_sizes, accuracies_batch, marker="o")
axes[2].set_title("Accuracy vs Batch Size")
axes[2].set_xlabel("Batch Size")
axes[2].set_ylabel("Accuracy")
plt.tight_layout()
plt.show()
The learning rate plot shows that accuracy peaks in the middle range of learning rates and drops sharply as the learning rate approaches 1. The optimal learning rate appears to be around $10^{-2}$, where the accuracy plateaus before dropping. This value strikes a balance between convergence speed and stability.
The regularization parameter (alpha) plot shows relatively stable accuracy for alpha values below $10^{-1}$, followed by a sharp decline as alpha increases. The best performance occurs just before the decline, which suggests that a small amount of regularization is beneficial but too much leads to underfitting. An optimal alpha is around $10^{-3}$ to $10^{-2}$.
The batch size plot shows the accuracy initially increases as the batch size goes up from 1, peaks around a batch size of 5, then generally decreases or remains stable with some fluctuations. Smaller batch sizes can lead to faster learning but can be noisy. A batch size of 5 is likely a good compromise between the benefits of stochasticity and the stability of larger batches.
In this case, for the next steps in sections (c) and (d), we use the optimal hyperparameters found here: learning rate $10^{-2}$, regularization parameter (alpha) between $10^{-3}$ and $10^{-2}$, and batch size 5. These values are now the new defaults when initializing the MLPClassifier for further analysis and training.
(c) Manual (greedy) hyperparameter tuning II: manually optimize hyperparameters that impact the model architecture. Next, we want to explore the impact of the model architecture on performance and optimize its selection. This means varying two parameters at a time instead of one as above. To do this, evaluate the validation accuracy resulting from training the model using each pair of possible numbers of nodes per layer and number of layers from the lists below. We will assume that for any given configuration the number of nodes in each layer is the same (e.g. (2,2,2), which would be a 3-layer network with 2 hidden nodes in each layer, and (25,25) are valid, but (2,5,3) is not because the number of hidden nodes varies in each layer). Use the manually optimized values for learning rate, regularization, and batch size selected from section (b).
Number of nodes per layer: $[1,2,3,4,5,10,15,25,30]$
Number of layers: $[1,2,3,4]$
Report the accuracy of your model on the validation data. For plotting these results, use heatmaps to plot the data in two dimensions. To make the heatmaps, you can use this code for creating heatmaps: https://matplotlib.org/stable/gallery/images_contours_and_fields/image_annotated_heatmap.html. Be sure to include the numerical values of accuracy in each grid square as shown in the linked example, and label your x, y, and color axes as always. Round these numerical values to 2 decimal places (due to some randomness in the training process, any further precision is not typically meaningful).
When you select your optimized parameters, be sure to keep in mind that these values may be sensitive to the data and may offer the potential to have high variance for larger models. Therefore, select the model with the highest accuracy but lowest number of total model weights (all else equal, the simpler model is preferred).
What do the results show? Which parameters did you select and why?
(c)¶
# Optimized hyperparameters from section (b)
optimized_learning_rate = 1e-2
optimized_alpha = 1e-3
optimized_batch_size = 5
# Possible numbers of nodes per layer and number of layers
nodes_per_layer_options = [1, 2, 3, 4, 5, 10, 15, 25, 30]
number_of_layers_options = [1, 2, 3, 4]
# Store the accuracy for each configuration
accuracy_results = np.zeros(
(len(nodes_per_layer_options), len(number_of_layers_options))
)
# Evaluate each architecture
for i, nodes_per_layer in enumerate(nodes_per_layer_options):
for j, number_of_layers in enumerate(number_of_layers_options):
hidden_layer_sizes = (nodes_per_layer,) * number_of_layers
clf = MLPClassifier(
hidden_layer_sizes=hidden_layer_sizes,
learning_rate_init=optimized_learning_rate,
alpha=optimized_alpha,
batch_size=optimized_batch_size,
max_iter=500,
random_state=42,
)
clf.fit(X_train, y_train)
accuracy = clf.score(X_val, y_val)
accuracy_results[i][j] = accuracy
# Create a heatmap
fig, ax = plt.subplots()
cax = ax.matshow(accuracy_results, cmap="viridis")
# Add color bar
fig.colorbar(cax)
# Add numbers to the squares
for i in range(len(nodes_per_layer_options)):
for j in range(len(number_of_layers_options)):
ax.text(j, i, f"{accuracy_results[i][j]:.2f}", va="center", ha="center")
# Set axis labels
ax.set_xticks(range(len(number_of_layers_options)))
ax.set_xticklabels(number_of_layers_options)
ax.set_yticks(range(len(nodes_per_layer_options)))
ax.set_yticklabels(nodes_per_layer_options)
ax.set_xlabel("Number of Layers")
ax.set_ylabel("Number of Nodes per Layer")
plt.show()
The optimal model architecture based on the heatmap is one with 1 layer and 15 nodes, as it achieves the highest observed accuracy of 0.74 while maintaining simplicity. This choice aligns with Occam's razor, preferring the least complex model that still delivers high performance. The one-layer, 15-node configuration minimizes the total number of weights among the top-performing architectures, which helps in avoiding overfitting. Consequently, this model is selected for its balance between accuracy and simplicity.
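To make the "fewest total weights" criterion concrete, here is a small sketch of counting the parameters of a fully connected network with 2 inputs and 1 output (count_parameters is a hypothetical helper, not required by the assignment):

def count_parameters(n_in, hidden_layer_sizes, n_out=1):
    # Total number of weights plus biases across all layers
    sizes = [n_in, *hidden_layer_sizes, n_out]
    return sum(a * b + b for a, b in zip(sizes[:-1], sizes[1:]))

print(count_parameters(2, (15,)))     # 61 parameters for the selected model
print(count_parameters(2, (30, 30)))  # 1051 parameters for the (30, 30) default

The (15,) network achieves comparable validation accuracy with roughly 6% of the parameters of the (30, 30) default, which is why it is preferred here.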
(d) Manual (greedy) model selection and retraining. Based on the optimal choice of hyperparameters, train your model with your optimized hyperparameters on all the training data AND the validation data (this is provided as X_train_plus_val and y_train_plus_val).
- Apply the trained model to the test data and report the accuracy of your final model on the test data.
- Plot an ROC curve of your performance (plot this with the curve in part (e) on the same set of axes you use for that question).
(d)¶
from sklearn.metrics import accuracy_score, roc_curve, auc
# Hyperparameters
# 1 layer with 15 nodes from (c)
optimal_hidden_layer_sizes = (15,)
optimal_learning_rate = 1e-2
optimal_alpha = 1e-3
optimal_batch_size = 5
# Train the model
clf = MLPClassifier(
hidden_layer_sizes=optimal_hidden_layer_sizes,
learning_rate_init=optimal_learning_rate,
alpha=optimal_alpha,
batch_size=optimal_batch_size,
max_iter=500,
)
clf.fit(X_train_plus_val, y_train_plus_val)
# Evaluate the model on the test data
y_pred = clf.predict(X_test)
accuracy = accuracy_score(y_test, y_pred)
print(f"Accuracy on test data: {accuracy:.2f}")
# Compute ROC curve and ROC area for the test data
y_score = clf.predict_proba(X_test)[:, 1] # Get the probability for the positive class
fpr, tpr, _ = roc_curve(y_test, y_score)
roc_auc = auc(fpr, tpr)
# Plot the ROC curve
plt.figure()
lw = 2
plt.plot(
fpr, tpr, color="darkorange", lw=lw, label="ROC curve (area = %0.2f)" % roc_auc
)
plt.plot([0, 1], [0, 1], color="navy", lw=lw, linestyle="--", label="Chance")
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.title("Receiver Operating Characteristic")
plt.legend(loc="lower right")
plt.show()
Accuracy on test data: 0.74
(e) Automated hyperparameter search through random search. The manual (greedy) approach (setting one or two parameters at a time while holding the rest constant) provides good insight into how the neural network hyperparameters impact model fitting for this particular training process. However, it is limited in one very problematic way: it depends heavily on a good "default" setting of the hyperparameters. Those were provided for you in this exercise, but they are not generally known. Our manual optimization was somewhat greedy because we picked the hyperparameters one at a time rather than looking at different combinations of hyperparameters. Adopting such a pseudo-greedy approach also limits our ability to search the hyperparameter space more deeply, since we don't look at simultaneous changes to multiple parameters. Now we'll use a popular hyperparameter optimization tool to accomplish that: random search.
Random search is an excellent example of a hyperparameter optimization search strategy that has been shown to be more efficient (requiring fewer training runs) than another common approach: grid search. Grid search evaluates all possible combinations of hyperparameters from lists of possible hyperparameter settings - a very computationally expensive process. Yet another attractive alternative is Bayesian Optimization, which is an excellent hyperparameter optimization strategy but we will leave that to the interested reader.
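For intuition, random search amounts to little more than the loop sketched below (illustrative only; score_config is a hypothetical stand-in for fitting on the training split and scoring on the validation split, which RandomizedSearchCV handles for us):

import numpy as np
from scipy.stats import loguniform

rng = np.random.default_rng(42)
best_score, best_config = -np.inf, None
for _ in range(200):
    config = {
        "learning_rate_init": loguniform.rvs(1e-5, 1e0, random_state=rng),
        "alpha": loguniform.rvs(1e-8, 1e2, random_state=rng),
        "batch_size": int(rng.choice([1, 3, 5, 10, 20, 50, 100, 250, 500])),
    }
    score = score_config(config)  # hypothetical: fit on train, score on validation
    if score > best_score:
        best_score, best_config = score, config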
Our particular random search tool will be Scikit-Learn's RandomizedSearchCV. This performs random search employing cross validation for performance evaluation (we will adjust this to be a train/validation split).
Using RandomizedSearchCV, train on the training data while validating on the validation data (see instructions below on how to setup the train/validation split automatically). This tool will randomly pick combinations of parameter values and test them out, returning the best combination it finds as measured by performance on the validation set. You can use this example as a template for how to do this.
- To make this comparable to the training/validation setup used for the greedy optimization, we need to set up a training and validation split rather than use cross validation. To do this for RandomizedSearchCV, we input the COMBINED training and validation dataset (X_train_plus_val and y_train_plus_val) and set the cv parameter to the train_val_split variable we provided along with the dataset. This sets up the algorithm to train only on the training data and evaluate on the validation data. Once RandomizedSearchCV completes its search, it will fit the model one more time to the combined training and validation data using the optimized parameters, as we would want it to. Note: the object returned by running fit (the random search) is NOT the best estimator. You can access the best estimator through the attribute .best_estimator_, assuming that you did not pass refit=False.
- Set the number of iterations to at least 200 (you'll look at 200 random pairings of possible hyperparameters). You can go as high as you want, but it will take longer the larger the value.
- If you run this on Colab or any system with multiple cores, set the parameter n_jobs to -1 to use all available cores for more efficient training through parallelization.
- You'll need to set the range or distribution of the parameters you want to sample from. Search over the same ranges as in the previous problems. To tell the algorithm the ranges to search, use a list of values for candidate batch_size (since those need to be integers rather than a range); the loguniform scipy function for setting the ranges of the learning rate and regularization parameter; and a list of tuples for the hidden_layer_sizes parameter, as you used in the greedy optimization.
- Once the model is fit, use the best_params_ attribute of the fit classifier to extract the optimized values of the hyperparameters; report those and compare them to what was selected through the manual, greedy optimization.
For the final generalization performance assessment:
- State the accuracy of the optimized models on the test dataset
- Plot the ROC curve corresponding to your best model on the test dataset from greedy hyperparameter selection vs the model identified through random search (these curves should be on the same set of axes for comparison). In the legend of the plot, report the AUC for each curve. This should be one single graph with 3 curves (one for greedy search, one for random search, and one representing random chance). Please also provide the AUC scores for the greedy search and the random search.
- Plot the final decision boundary for the greedy and random search-based classifiers along with the test dataset to demonstrate the shape of the final boundary
- How did the generalization performance compare between the hyperparameters selected through the manual (greedy) search and the random search?
(e)¶
from sklearn.model_selection import RandomizedSearchCV
from scipy.stats import loguniform
from time import time
# Define the parameter distributions to sample from
param_distributions = {
"hidden_layer_sizes": [
(nodes,) * layers
for nodes in [1, 2, 3, 4, 5, 10, 15, 25, 30]
for layers in [1, 2, 3, 4]
],
"learning_rate_init": loguniform(1e-5, 1e-0),
"alpha": loguniform(1e-8, 1e2),
"batch_size": [1, 3, 5, 10, 20, 50, 100, 250, 500],
}
# Initialize the MLPClassifier
clf = MLPClassifier(max_iter=500)
# Set up the RandomizedSearchCV object
random_search = RandomizedSearchCV(
estimator=clf,
param_distributions=param_distributions,
n_iter=200,
cv=train_val_split,
n_jobs=-1,
random_state=42,
)
# Function to report best scores from the search
def report(results, n_top=3):
for i in range(1, n_top + 1):
candidates = np.flatnonzero(results["rank_test_score"] == i)
for candidate in candidates:
print("Model with rank: {0}".format(i))
print(
"Mean validation score: {0:.3f} (std: {1:.3f})".format(
results["mean_test_score"][candidate],
results["std_test_score"][candidate],
)
)
print("Parameters: {0}".format(results["params"][candidate]))
print("")
# Perform the random search
start = time()
random_search.fit(X_train_plus_val, y_train_plus_val)
print(
"RandomizedSearchCV took %.2f seconds for %d candidate parameter settings."
% ((time() - start), random_search.n_iter)
)
# Extract the best score from the random search results
best_score_index = random_search.cv_results_["rank_test_score"].argmin()
best_score = random_search.cv_results_["mean_test_score"][best_score_index]
# Extract the best hyperparameters and the best estimator
best_hyperparams = random_search.best_params_
best_estimator = random_search.best_estimator_
print("Best Hyperparameters:", best_hyperparams)
print("Best Validation Score:", best_score)
# Evaluate on the test data
test_accuracy = best_estimator.score(X_test, y_test)
print("Test Accuracy on the test data: {:.2f}".format(test_accuracy))
# Compute ROC curve and ROC area for the test data
y_score = best_estimator.predict_proba(X_test)[
:, 1
] # Get the probability for the positive class
fpr, tpr, _ = roc_curve(y_test, y_score)
roc_auc = auc(fpr, tpr)
# Plot the ROC curve
plt.figure()
lw = 2
plt.plot(
fpr,
tpr,
color="darkorange",
lw=lw,
label="Random Search ROC curve (area = {:.2f})".format(roc_auc),
)
plt.plot([0, 1], [0, 1], color="navy", lw=lw, linestyle="--", label="Chance")
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.title("Receiver Operating Characteristic (ROC) Curve")
plt.legend(loc="lower right")
plt.show()
RandomizedSearchCV took 46.82 seconds for 200 candidate parameter settings.
Best Hyperparameters: {'alpha': 1.6626148525058852e-05, 'batch_size': 100, 'hidden_layer_sizes': (10, 10), 'learning_rate_init': 0.0004339970334001344}
Best Validation Score: 0.752
Test Accuracy on the test data: 0.74
# One layer with 15 nodes, using the hyperparameters selected in (b) and (c)
greedy_hyperparams = {
"hidden_layer_sizes": (15,),
"learning_rate_init": 0.01,
"alpha": 0.001,
"batch_size": 5,  # batch size selected in section (b)
"max_iter": 500,
}
# Fit the model
best_estimator_greedy = MLPClassifier(**greedy_hyperparams)
best_estimator_greedy.fit(X_train_plus_val, y_train_plus_val)
# Use the best hyperparameters found by the random search above so this
# comparison matches the reported search results
random_hyperparams = {**random_search.best_params_, "max_iter": 500}
# Fit the model
best_estimator_random = MLPClassifier(**random_hyperparams)
best_estimator_random.fit(X_train_plus_val, y_train_plus_val)
# Hypothetical predicted probabilities for the positive class
y_score_greedy = best_estimator_greedy.predict_proba(X_test)[:, 1]
y_score_random = best_estimator_random.predict_proba(X_test)[:, 1]
# Calculate ROC curve and AUC for greedy search
fpr_greedy, tpr_greedy, _ = roc_curve(y_test, y_score_greedy)
roc_auc_greedy = auc(fpr_greedy, tpr_greedy)
# Calculate ROC curve and AUC for random search
fpr_random, tpr_random, _ = roc_curve(y_test, y_score_random)
roc_auc_random = auc(fpr_random, tpr_random)
# Plot
plt.figure(figsize=(8, 6))
plt.plot(
fpr_greedy,
tpr_greedy,
color="red",
label=f"Greedy Search (AUC = {roc_auc_greedy:.2f})",
)
plt.plot(
fpr_random,
tpr_random,
color="blue",
label=f"Random Search (AUC = {roc_auc_random:.2f})",
)
plt.plot([0, 1], [0, 1], color="navy", linestyle="--", label="Chance (AUC = 0.50)")
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.title("ROC Curve Comparison")
plt.legend(loc="lower right")
plt.show()
def plot_decision_boundary(classifier, X, y, title, resolution=0.02):
# Setup marker generator and color map
markers = ("s", "x", "o", "^", "v")
colors = ("orange", "blue", "lightgreen", "gray", "cyan")
cmap = ListedColormap(colors[: len(np.unique(y))])
# Plot the decision surface
x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx1, xx2 = np.meshgrid(
np.arange(x1_min, x1_max, resolution), np.arange(x2_min, x2_max, resolution)
)
Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
Z = Z.reshape(xx1.shape)
plt.contourf(xx1, xx2, Z, alpha=0.4, cmap=cmap)
plt.xlim(xx1.min(), xx1.max())
plt.ylim(xx2.min(), xx2.max())
# Plot class samples
for idx, cl in enumerate(np.unique(y)):
plt.scatter(
x=X[y == cl, 0],
y=X[y == cl, 1],
alpha=0.8,
c=colors[idx],
marker=markers[idx],
label=f"Class {cl}",
edgecolor="black",
)
plt.xlabel("Feature 1")
plt.ylabel("Feature 2")
plt.legend(loc="upper left")
plt.title(title)
plt.show()
plot_decision_boundary(
best_estimator_greedy, X_test, y_test, "Decision Boundary for Greedy Search"
)
plot_decision_boundary(
best_estimator_random, X_test, y_test, "Decision Boundary for Random Search"
)
Both the greedy and random search approaches yielded models with similar generalization performance on unseen test data, as evidenced by nearly identical ROC curves with AUC scores of 0.82 and 0.81, respectively. The decision boundary plots for both models suggest comparable abilities to separate classes within the feature space. Given the marginal difference in AUC scores, the model from the greedy search might have a slight advantage in distinguishing between classes, but overall, the performance of both models indicates that either hyperparameter tuning method can be effective for this dataset.
2¶
[30 points] Build and test your own Neural Network for classification¶
There is no better way to understand how one of the core techniques of modern machine learning works than to build a simple version of it yourself. In this exercise you will construct and apply your own neural network classifier. You may use numpy if you wish but no other libraries.
(a) [10 points of the 30] Create a neural network class that follows the scikit-learn classifier convention by implementing fit, predict, and predict_proba methods. Your fit method should run backpropagation on your training data using stochastic gradient descent. Assume the activation function is a sigmoid. Choose your model architecture to have two input nodes, two hidden layers with five nodes each, and one output node.
To guide you in the right direction with this problem, please find a skeleton of a neural network class below. You absolutely MAY use additional methods beyond those suggested in this template, but the methods listed below are the minimum required to implement the model cleanly.
Strategies for debugging. One of the greatest challenges of this implementation is that there are many parts, and a bug could be present in any of them. Here are some recommended tips:
- Development environment. Consider using an Integrated Development Environment (IDE). I strongly recommend the use of VS Code and the Python debugging tools in that development environment.
- Unit tests. You are strongly encouraged to create unit tests for most modules; without them, your code will be extremely difficult to debug. You can create simple examples to feed through the network to validate that it is correctly computing activations and node values. Also, if you manually set the weights of the model, you can even calculate backpropagation by hand for some simple examples (admittedly, that unit test would be challenging and is optional, but a unit test is possible).
- Compare against a similar architecture. You can also verify the performance of your overall neural network by comparing it against the scikit-learn implementation, using the same architecture and parameters as your model (your model outputs will certainly not be identical, but they should be somewhat similar for similar parameter settings).
NOTE: Due to the depth this question requires, some students may choose not to complete this section (in lieu of receiving the 10 points from this question). If you choose not to build your own neural network, or if your neural network is not functional prior to submission, then use the scikit-learn implementation instead in the questions below; where it asks to compare to scikit-learn, compare against a random forest classifier instead.
(a)¶
# neural network class skeleton code
class myNeuralNetwork(object):
def __init__(self, n_in, n_layer1, n_layer2, n_out, learning_rate=0.01):
"""__init__
Class constructor: Initialize the parameters of the network including
the learning rate, layer sizes, and each of the parameters
of the model (weights, placeholders for activations, inputs,
deltas for gradients, and weight gradients). This method
should also initialize the weights of your model randomly
Input:
n_in: number of inputs
n_layer1: number of nodes in layer 1
n_layer2: number of nodes in layer 2
n_out: number of output nodes
learning_rate: learning rate for gradient descent
Output:
none
"""
# initialize input
self.n_in = n_in
self.n_layer1 = n_layer1
self.n_layer2 = n_layer2
self.n_out = n_out
self.learning_rate = learning_rate
# initialize weights with small random values for each layer
self.W1 = np.random.randn(self.n_in, self.n_layer1) * 0.01
self.W2 = np.random.randn(self.n_layer1, self.n_layer2) * 0.01
self.W_out = np.random.randn(self.n_layer2, self.n_out) * 0.01
# initialize bias
self.b1 = np.random.randn(1, self.n_layer1) * 0.01
self.b2 = np.random.randn(1, self.n_layer2) * 0.01
self.b_out = np.random.randn(1, self.n_out) * 0.01
def forward_propagation(self, x):
"""forward_propagation
Takes a vector of your input data (one sample) and feeds
it forward through the neural network, calculating activations and
layer node values along the way.
Input:
x: a vector of data representing 1 sample [n_in x 1]
Output:
y_hat: a vector (or scalar) of predictions [n_out x 1]
(typically n_out will be 1 for binary classification)
"""
# input to first hidden layer
self.a1 = np.dot(x, self.W1) + self.b1
self.z1 = self.sigmoid(self.a1)
# first hidden layer to second hidden layer
self.a2 = np.dot(self.z1, self.W2) + self.b2
self.z2 = self.sigmoid(self.a2)
# second hidden layer to output layer
self.z_out = np.dot(self.z2, self.W_out) + self.b_out
y_hat = self.sigmoid(self.z_out) # final output prediction
return y_hat
def compute_loss(self, y, y_hat):
"""compute_loss
Computes the current loss/cost of the neural network predictions
y_hat compared to the target variable y using the cross-entropy
loss function
Input:
y: Target variable [N x 1]
y_hat: Network predictions [N x 1]
Output:
loss: a scalar measure of loss/cost
"""
# compute the cross-entropy loss, clipping predictions away from
# exactly 0 and 1 to avoid evaluating log(0)
m = y.shape[0] # number of samples
y_hat = np.clip(y_hat, 1e-12, 1 - 1e-12)
loss = -np.sum(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat)) / m
return loss
def backpropagate(self, x, y):
"""backpropagate
Backpropagate the error from one sample determining the gradients
with respect to each of the weights in the network. The steps for
this algorithm are:
1. Run a forward pass of the model to get the activations
corresponding to x and get the loss function of the model
predictions compared to the target variable y
2. Compute the deltas (see lecture notes) and values of the
gradient with respect to each weight in each layer moving
backwards through the network
Input:
x: A vector of 1 sample of data [n_in x 1]
y: Target variable [scalar]
Output:
loss: a scalar measure of the loss/cost associated with x,y
and the current model weights
"""
# Step 1: Forward pass
y_hat = self.forward_propagation(x)
loss = self.compute_loss(y, y_hat)
# Step 2: Backward pass to compute gradients
# Output layer: with a sigmoid output and cross-entropy loss, the
# sigmoid derivative cancels and the delta simplifies to (y_hat - y)
dZ3 = y_hat - y
dW_out = np.dot(self.z2.T, dZ3) / y.shape[0]
db_out = np.sum(dZ3, axis=0, keepdims=True) / y.shape[0]
# second hidden layer: propagate the output delta back through W_out,
# applying the sigmoid derivative at the pre-activation values a2
dA2 = np.dot(dZ3, self.W_out.T)
dZ2 = dA2 * self.sigmoid_derivative(self.a2)
dW2 = np.dot(self.z1.T, dZ2) / y.shape[0]
db2 = np.sum(dZ2, axis=0, keepdims=True) / y.shape[0]
# first hidden layer: same pattern, using the pre-activation values a1
dA1 = np.dot(dZ2, self.W2.T)
dZ1 = dA1 * self.sigmoid_derivative(self.a1)
dW1 = np.dot(x.T, dZ1) / y.shape[0]
db1 = np.sum(dZ1, axis=0, keepdims=True) / y.shape[0]
# update weights and biases (stochastic gradient descent step)
self.W1 -= self.learning_rate * dW1
self.W2 -= self.learning_rate * dW2
self.W_out -= self.learning_rate * dW_out
self.b1 -= self.learning_rate * db1
self.b2 -= self.learning_rate * db2
self.b_out -= self.learning_rate * db_out
return loss
# def stochastic_gradient_descent_step(self):
# """stochastic_gradient_descent_step [OPTIONAL - you may also do this
# directly in backpropagate]
# Using the gradient values computed by backpropagate, update each
# weight value of the model according to the familiar stochastic
# gradient descent update equation.
# Input: none
# Output: none
# """
# # Update weights and biases for each layer based on the
# # gradients computed in backpropagation and the learning rate
# # Calculate start and end indices for the current batch
# idx_start = self.cur_epoch_iter * self.batch_size
# idx_end = idx_start + self.batch_size
# # Safeguard for the last batch which might be smaller than batch_size
# idx_end = min(idx_end, len(self.X_train))
# # Extract the batch for X and y
# x_batch = self.X_train[self.idx[idx_start:idx_end]].T
# y_batch = self.y_train[:, self.idx[idx_start:idx_end]]
# # Backpropagation for the batch
# self.backpropagate(x_batch, y_batch)
def fit(
self,
X,
y,
max_epochs=100,
learning_rate=0.01,
get_validation_loss=False,
X_val=None,
y_val=None,
):
"""fit
Input:
X: A matrix of N samples of data [N x n_in]
y: Target variable [N x 1]
Output:
training_loss: Vector of training loss values at the end of each epoch
validation_loss: Vector of validation loss values at the end of each epoch
[optional output if get_validation_loss==True]
"""
training_loss = []
validation_loss = []
self.learning_rate = learning_rate
for epoch in range(max_epochs):
# Initialize total loss for the epoch
epoch_loss = 0
# Iterate over individual samples
for i in range(X.shape[0]):
# Perform a forward pass and backpropagation
x_sample = X[i].reshape(1, -1) # reshape to a [1 x n_in] row vector
y_sample = y[i].reshape(1, -1)
loss = self.backpropagate(x_sample, y_sample)
epoch_loss += loss
# Average the loss over all samples and store it
avg_epoch_loss = epoch_loss / X.shape[0]
training_loss.append(avg_epoch_loss)
# Optionally, compute validation loss
if get_validation_loss and X_val is not None and y_val is not None:
val_loss = 0
for i in range(X_val.shape[0]):
y_hat = self.forward_propagation(X_val[i].reshape(1, -1))
val_loss += self.compute_loss(y_val[i].reshape(1, -1), y_hat)
validation_loss.append(val_loss / X_val.shape[0])
# Print progress (one line per epoch)
progress = f"Epoch {epoch+1}/{max_epochs}, Training Loss: {avg_epoch_loss:.6f}"
if get_validation_loss and validation_loss:
progress += f", Validation Loss: {validation_loss[-1]:.6f}"
print(progress)
# Return the training and optionally validation loss
return training_loss, validation_loss
def predict_proba(self, X):
"""predict_proba
Compute the output of the neural network for each sample in X, with the last layer's
sigmoid activation providing an estimate of the target output between 0 and 1
Input:
X: A matrix of N samples of data [N x n_in]
Output:
y_hat: A vector of class predictions between 0 and 1 [N x 1]
"""
# Using list comprehension for a concise format
y_hat = [self.forward_propagation(x.reshape(1, self.n_in)) for x in X]
return np.array(y_hat).flatten()
def predict(self, X, decision_thresh=0.5):
"""predict
Compute the output of the neural network prediction for
each sample in X, with the last layer's sigmoid activation
providing an estimate of the target output between 0 and 1,
then thresholding that prediction based on decision_thresh
to produce a binary class prediction
Input:
X: A matrix of N samples of data [N x n_in]
decision_threshold: threshold for the class confidence score
of predict_proba for binarizing the output
Output:
y_hat: A vector of class predictions of either 0 or 1 [N x 1]
"""
# Directly using list comprehension and np.array for
# conversion and condition application
y_hat_proba = self.predict_proba(X)
y_hat = np.array([1 if y >= decision_thresh else 0 for y in y_hat_proba])
return y_hat
def sigmoid(self, X):
"""sigmoid
Compute the sigmoid function for each value in matrix X
Input:
X: A matrix of any size [m x n]
Output:
X_sigmoid: A matrix [m x n] where each entry corresponds to the
entry of X after applying the sigmoid function
"""
# Our activation function: f(x) = 1 / (1 + e^(-x))
return 1 / (1 + np.exp(-X))
def sigmoid_derivative(self, X):
"""sigmoid_derivative
Compute the sigmoid derivative function for each value in matrix X
Input:
X: A matrix of any size [m x n]
Output:
X_sigmoid: A matrix [m x n] where each entry corresponds to the
entry of X after applying the sigmoid derivative function
"""
return self.sigmoid(X) * (1 - self.sigmoid(X))
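Before applying the network, a quick smoke test helps confirm the implementation learns at all, in the spirit of the unit-testing advice above. This is a minimal sketch (the toy data and threshold are illustrative assumptions, and fit prints per-epoch progress):

import numpy as np

np.random.seed(0)
X_toy = np.random.randn(40, 2)
y_toy = (X_toy[:, 0] + X_toy[:, 1] > 0).astype(float)  # linearly separable labels

toy_net = myNeuralNetwork(n_in=2, n_layer1=5, n_layer2=5, n_out=1)
toy_loss, _ = toy_net.fit(X_toy, y_toy, max_epochs=50, learning_rate=0.5)
assert toy_loss[-1] < toy_loss[0], "training loss should decrease on this toy set"
print(f"toy training loss: {toy_loss[0]:.3f} -> {toy_loss[-1]:.3f}")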
(b) Apply your neural network.
- Create training, validation, and test datasets using sklearn.datasets.make_moons(N, noise=0.20) data, where $N_{train} = 500$ and $N_{test} = 100$. The validation dataset should be a portion of your training dataset that you hold out for hyperparameter tuning.
- Cost function plots. Train and validate your model on this dataset, plotting your training and validation cost learning curves on the same set of axes. This is the training and validation error for each epoch of stochastic gradient descent, where an epoch represents having trained on each of the training samples one time.
- Tune the learning rate and number of training epochs for your model to improve performance as needed. You're free to use any method you deem fit to tune your hyperparameters, such as grid search, random search, or Bayesian optimization.
- Decision boundary plots. In two subplots, plot the training data on one subplot and the validation data on the other subplot. On each plot, also plot the decision boundary from your neural network trained on the training data.
- ROC Curve plots. Report your performance on the test data with an ROC curve and the corresponding AUC score. Compare against the scikit-learn MLPClassifier trained with the same parameters, plotted on the same set of axes, and include the chance diagonal. Note: if you chose not to build your own neural network in part (a) above, or if your neural network is not functional prior to submission, then use the scikit-learn MLPClassifier class for the neural network and compare it against a random forest classifier instead. Be sure to set the hidden layer sizes, epochs, and learning rate for that model, if so.
- Remember to retrain your model. After selecting your hyperparameters using the validation dataset, when evaluating final performance on the ROC curve, it's good practice to retrain your model with the selected hyperparameters on the train + validation dataset before evaluating on the test data.
Note if you opted not to build your own neural network: in this case, for hyperparameter tuning, we recommend using the partial_fit method to train your model for every epoch. Partial fit allows you to incrementally fit on one sample at a time.
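For that route, here is a minimal sketch of per-epoch training with scikit-learn's partial_fit (it assumes the X_train/y_train split created in the next cell; the hidden layer sizes and epoch count are illustrative):

import numpy as np
from sklearn.neural_network import MLPClassifier

sk_nn = MLPClassifier(hidden_layer_sizes=(5, 5), learning_rate_init=0.01)
classes = np.unique(y_train)  # partial_fit requires the class list on the first call
epoch_losses = []
for epoch in range(100):
    sk_nn.partial_fit(X_train, y_train, classes=classes)
    epoch_losses.append(sk_nn.loss_)  # loss after this epoch's pass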
(b)¶
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
N_train = 500
N_test = 100
# Generate N_train + N_test samples so that, after the split below, there are
# exactly N_train training and N_test test points
X, y = make_moons(N_train + N_test, noise=0.20, random_state=42)
X_train_full, X_test, y_train_full, y_test = train_test_split(
X, y, test_size=N_test, random_state=42
)
X_train, X_val, y_train, y_val = train_test_split(
X_train_full, y_train_full, test_size=0.2, random_state=42
)
# 2. Train the Model and Plot Cost Function
# Initialize the neural network
import matplotlib.pyplot as plt
nn = myNeuralNetwork(n_in=2, n_layer1=5, n_layer2=5, n_out=1, learning_rate=0.01)
# Train the model and collect training and validation losses
training_loss, validation_loss = nn.fit(
X_train,
y_train,
max_epochs=500,
learning_rate=0.01,
get_validation_loss=True,
X_val=X_val,
y_val=y_val,
)
# Plotting the cost function
plt.plot(training_loss, label="Training Loss")
plt.plot(validation_loss, label="Validation Loss")
plt.xlabel("Epoch")
plt.ylabel("Loss")
plt.legend()
plt.title("Training and Validation Loss Curves")
plt.show()
Epoch 1/500, Training Loss: 0.693330, Validation Loss: 0.692998
...
[Per-epoch training log truncated; the printed training and validation losses remained near 0.693 throughout the run.]
Validation Loss: 0.6929415112505273Epoch 170/500, Training Loss: 0.6927502656225695, Validation Loss: 0.6929415078006077Epoch 171/500, Training Loss: 0.6927502605169823, Validation Loss: 0.6929415043337908Epoch 172/500, Training Loss: 0.6927502553986586, Validation Loss: 0.6929415008499185Epoch 173/500, Training Loss: 0.6927502502674713, Validation Loss: 0.6929414973488339Epoch 174/500, Training Loss: 0.6927502451232904, Validation Loss: 0.692941493830378Epoch 175/500, Training Loss: 0.6927502399659858, Validation Loss: 0.6929414902943887Epoch 176/500, Training Loss: 0.6927502347954243, Validation Loss: 0.6929414867407042Epoch 177/500, Training Loss: 0.6927502296114721, Validation Loss: 0.6929414831691598Epoch 178/500, Training Loss: 0.6927502244139967, Validation Loss: 0.6929414795795896Epoch 179/500, Training Loss: 0.6927502192028634, Validation Loss: 0.6929414759718259Epoch 180/500, Training Loss: 0.6927502139779336, Validation Loss: 0.6929414723456995Epoch 181/500, Training Loss: 0.6927502087390702, Validation Loss: 0.6929414687010395Epoch 182/500, Training Loss: 0.6927502034861349, Validation Loss: 0.6929414650376731Epoch 183/500, Training Loss: 0.6927501982189874, Validation Loss: 0.6929414613554252Epoch 184/500, Training Loss: 0.6927501929374844, Validation Loss: 0.6929414576541212Epoch 185/500, Training Loss: 0.6927501876414865, Validation Loss: 0.6929414539335811Epoch 186/500, Training Loss: 0.6927501823308485, Validation Loss: 0.6929414501936261Epoch 187/500, Training Loss: 0.692750177005424, Validation Loss: 0.6929414464340742Epoch 188/500, Training Loss: 0.6927501716650684, Validation Loss: 0.6929414426547421Epoch 189/500, Training Loss: 0.6927501663096326, Validation Loss: 0.6929414388554436Epoch 190/500, Training Loss: 0.692750160938968, Validation Loss: 0.6929414350359915Epoch 191/500, Training Loss: 0.6927501555529236, Validation Loss: 0.6929414311961969Epoch 192/500, Training Loss: 0.692750150151348, Validation Loss: 0.6929414273358676Epoch 193/500, Training Loss: 0.6927501447340895, Validation Loss: 0.692941423454811Epoch 194/500, Training Loss: 0.6927501393009902, Validation Loss: 0.6929414195528314Epoch 195/500, Training Loss: 0.692750133851896, Validation Loss: 0.6929414156297315Epoch 196/500, Training Loss: 0.69275012838665, Validation Loss: 0.6929414116853115Epoch 197/500, Training Loss: 0.6927501229050923, Validation Loss: 0.6929414077193703Epoch 198/500, Training Loss: 0.692750117407062, Validation Loss: 0.6929414037317038Epoch 199/500, Training Loss: 0.6927501118923994, Validation Loss: 0.6929413997221059Epoch 200/500, Training Loss: 0.6927501063609381, Validation Loss: 0.6929413956903687Epoch 201/500, Training Loss: 0.6927501008125153, Validation Loss: 0.6929413916362819Epoch 202/500, Training Loss: 0.6927500952469636, Validation Loss: 0.692941387559633Epoch 203/500, Training Loss: 0.6927500896641152, Validation Loss: 0.6929413834602067Epoch 204/500, Training Loss: 0.6927500840638005, Validation Loss: 0.6929413793377865Epoch 205/500, Training Loss: 0.6927500784458475, Validation Loss: 0.6929413751921524Epoch 206/500, Training Loss: 0.6927500728100846, Validation Loss: 0.6929413710230816Epoch 207/500, Training Loss: 0.6927500671563358, Validation Loss: 0.692941366830352Epoch 208/500, Training Loss: 0.6927500614844243, Validation Loss: 0.6929413626137351Epoch 209/500, Training Loss: 0.6927500557941745, Validation Loss: 0.6929413583730023Epoch 210/500, Training Loss: 0.6927500500854052, Validation Loss: 0.6929413541079217Epoch 211/500, Training Loss: 
0.6927500443579346, Validation Loss: 0.692941349818259Epoch 212/500, Training Loss: 0.6927500386115797, Validation Loss: 0.6929413455037776Epoch 213/500, Training Loss: 0.6927500328461559, Validation Loss: 0.6929413411642378Epoch 214/500, Training Loss: 0.6927500270614764, Validation Loss: 0.692941336799398Epoch 215/500, Training Loss: 0.6927500212573509, Validation Loss: 0.6929413324090123Epoch 216/500, Training Loss: 0.6927500154335905, Validation Loss: 0.6929413279928346Epoch 217/500, Training Loss: 0.6927500095900027, Validation Loss: 0.6929413235506137Epoch 218/500, Training Loss: 0.6927500037263918, Validation Loss: 0.6929413190820968Epoch 219/500, Training Loss: 0.6927499978425637, Validation Loss: 0.6929413145870279Epoch 220/500, Training Loss: 0.6927499919383171, Validation Loss: 0.6929413100651487Epoch 221/500, Training Loss: 0.6927499860134535, Validation Loss: 0.6929413055161968Epoch 222/500, Training Loss: 0.6927499800677704, Validation Loss: 0.6929413009399084Epoch 223/500, Training Loss: 0.6927499741010619, Validation Loss: 0.692941296336015Epoch 224/500, Training Loss: 0.6927499681131234, Validation Loss: 0.6929412917042465Epoch 225/500, Training Loss: 0.6927499621037461, Validation Loss: 0.6929412870443291Epoch 226/500, Training Loss: 0.6927499560727182, Validation Loss: 0.6929412823559857Epoch 227/500, Training Loss: 0.6927499500198265, Validation Loss: 0.6929412776389366Epoch 228/500, Training Loss: 0.692749943944857, Validation Loss: 0.6929412728928986Epoch 229/500, Training Loss: 0.6927499378475913, Validation Loss: 0.6929412681175846Epoch 230/500, Training Loss: 0.692749931727811, Validation Loss: 0.6929412633127059Epoch 231/500, Training Loss: 0.6927499255852922, Validation Loss: 0.6929412584779683Epoch 232/500, Training Loss: 0.6927499194198132, Validation Loss: 0.6929412536130758Epoch 233/500, Training Loss: 0.6927499132311448, Validation Loss: 0.6929412487177282Epoch 234/500, Training Loss: 0.6927499070190591, Validation Loss: 0.6929412437916224Epoch 235/500, Training Loss: 0.6927499007833251, Validation Loss: 0.6929412388344509Epoch 236/500, Training Loss: 0.6927498945237083, Validation Loss: 0.6929412338459038Epoch 237/500, Training Loss: 0.6927498882399717, Validation Loss: 0.6929412288256656Epoch 238/500, Training Loss: 0.6927498819318776, Validation Loss: 0.6929412237734198Epoch 239/500, Training Loss: 0.6927498755991836, Validation Loss: 0.6929412186888441Epoch 240/500, Training Loss: 0.6927498692416457, Validation Loss: 0.6929412135716129Epoch 241/500, Training Loss: 0.6927498628590174, Validation Loss: 0.6929412084213968Epoch 242/500, Training Loss: 0.692749856451049, Validation Loss: 0.6929412032378627Epoch 243/500, Training Loss: 0.6927498500174881, Validation Loss: 0.6929411980206736Epoch 244/500, Training Loss: 0.6927498435580804, Validation Loss: 0.6929411927694875Epoch 245/500, Training Loss: 0.692749837072568, Validation Loss: 0.6929411874839596Epoch 246/500, Training Loss: 0.6927498305606894, Validation Loss: 0.6929411821637403Epoch 247/500, Training Loss: 0.6927498240221832, Validation Loss: 0.6929411768084759Epoch 248/500, Training Loss: 0.6927498174567817, Validation Loss: 0.6929411714178086Epoch 249/500, Training Loss: 0.692749810864216, Validation Loss: 0.6929411659913757Epoch 250/500, Training Loss: 0.6927498042442135, Validation Loss: 0.6929411605288102Epoch 251/500, Training Loss: 0.6927497975964989, Validation Loss: 0.6929411550297422Epoch 252/500, Training Loss: 0.6927497909207948, Validation Loss: 0.6929411494937945Epoch 253/500, 
Training Loss: 0.6927497842168172, Validation Loss: 0.6929411439205884Epoch 254/500, Training Loss: 0.6927497774842838, Validation Loss: 0.6929411383097374Epoch 255/500, Training Loss: 0.6927497707229058, Validation Loss: 0.6929411326608521Epoch 256/500, Training Loss: 0.6927497639323927, Validation Loss: 0.6929411269735386Epoch 257/500, Training Loss: 0.6927497571124495, Validation Loss: 0.6929411212473974Epoch 258/500, Training Loss: 0.692749750262777, Validation Loss: 0.6929411154820242Epoch 259/500, Training Loss: 0.6927497433830759, Validation Loss: 0.6929411096770094Epoch 260/500, Training Loss: 0.6927497364730394, Validation Loss: 0.6929411038319383Epoch 261/500, Training Loss: 0.6927497295323618, Validation Loss: 0.6929410979463924Epoch 262/500, Training Loss: 0.6927497225607281, Validation Loss: 0.6929410920199455Epoch 263/500, Training Loss: 0.6927497155578259, Validation Loss: 0.6929410860521685Epoch 264/500, Training Loss: 0.692749708523335, Validation Loss: 0.692941080042625Epoch 265/500, Training Loss: 0.6927497014569303, Validation Loss: 0.6929410739908742Epoch 266/500, Training Loss: 0.6927496943582887, Validation Loss: 0.6929410678964694Epoch 267/500, Training Loss: 0.6927496872270772, Validation Loss: 0.6929410617589581Epoch 268/500, Training Loss: 0.6927496800629614, Validation Loss: 0.6929410555778831Epoch 269/500, Training Loss: 0.6927496728656053, Validation Loss: 0.6929410493527788Epoch 270/500, Training Loss: 0.6927496656346629, Validation Loss: 0.6929410430831768Epoch 271/500, Training Loss: 0.6927496583697905, Validation Loss: 0.692941036768601Epoch 272/500, Training Loss: 0.6927496510706372, Validation Loss: 0.6929410304085688Epoch 273/500, Training Loss: 0.6927496437368467, Validation Loss: 0.6929410240025923Epoch 274/500, Training Loss: 0.6927496363680603, Validation Loss: 0.6929410175501765Epoch 275/500, Training Loss: 0.6927496289639156, Validation Loss: 0.6929410110508213Epoch 276/500, Training Loss: 0.6927496215240438, Validation Loss: 0.692941004504019Epoch 277/500, Training Loss: 0.6927496140480734, Validation Loss: 0.6929409979092552Epoch 278/500, Training Loss: 0.692749606535626, Validation Loss: 0.6929409912660092Epoch 279/500, Training Loss: 0.6927495989863222, Validation Loss: 0.692940984573754Epoch 280/500, Training Loss: 0.6927495913997743, Validation Loss: 0.6929409778319544Epoch 281/500, Training Loss: 0.6927495837755919, Validation Loss: 0.6929409710400691Epoch 282/500, Training Loss: 0.6927495761133788, Validation Loss: 0.6929409641975497Epoch 283/500, Training Loss: 0.6927495684127354, Validation Loss: 0.6929409573038401Epoch 284/500, Training Loss: 0.6927495606732552, Validation Loss: 0.6929409503583771Epoch 285/500, Training Loss: 0.6927495528945276, Validation Loss: 0.6929409433605901Epoch 286/500, Training Loss: 0.6927495450761378, Validation Loss: 0.6929409363099003Epoch 287/500, Training Loss: 0.6927495372176633, Validation Loss: 0.6929409292057214Epoch 288/500, Training Loss: 0.6927495293186794, Validation Loss: 0.6929409220474599Epoch 289/500, Training Loss: 0.692749521378752, Validation Loss: 0.6929409148345134Epoch 290/500, Training Loss: 0.6927495133974462, Validation Loss: 0.6929409075662727Epoch 291/500, Training Loss: 0.6927495053743187, Validation Loss: 0.6929409002421183Epoch 292/500, Training Loss: 0.6927494973089208, Validation Loss: 0.6929408928614241Epoch 293/500, Training Loss: 0.6927494892007983, Validation Loss: 0.6929408854235548Epoch 294/500, Training Loss: 0.6927494810494913, Validation Loss: 0.6929408779278665Epoch 
295/500, Training Loss: 0.692749472854535, Validation Loss: 0.6929408703737069Epoch 296/500, Training Loss: 0.6927494646154567, Validation Loss: 0.692940862760414Epoch 297/500, Training Loss: 0.6927494563317774, Validation Loss: 0.6929408550873172Epoch 298/500, Training Loss: 0.6927494480030152, Validation Loss: 0.6929408473537364Epoch 299/500, Training Loss: 0.6927494396286772, Validation Loss: 0.692940839558982Epoch 300/500, Training Loss: 0.692749431208269, Validation Loss: 0.6929408317023561Epoch 301/500, Training Loss: 0.692749422741285, Validation Loss: 0.6929408237831491Epoch 302/500, Training Loss: 0.692749414227216, Validation Loss: 0.6929408158006438Epoch 303/500, Training Loss: 0.6927494056655448, Validation Loss: 0.6929408077541105Epoch 304/500, Training Loss: 0.692749397055749, Validation Loss: 0.6929407996428109Epoch 305/500, Training Loss: 0.6927493883972964, Validation Loss: 0.692940791465997Epoch 306/500, Training Loss: 0.6927493796896502, Validation Loss: 0.6929407832229076Epoch 307/500, Training Loss: 0.6927493709322665, Validation Loss: 0.6929407749127734Epoch 308/500, Training Loss: 0.692749362124591, Validation Loss: 0.692940766534813Epoch 309/500, Training Loss: 0.6927493532660663, Validation Loss: 0.6929407580882342Epoch 310/500, Training Loss: 0.6927493443561235, Validation Loss: 0.6929407495722331Epoch 311/500, Training Loss: 0.6927493353941891, Validation Loss: 0.6929407409859952Epoch 312/500, Training Loss: 0.6927493263796787, Validation Loss: 0.6929407323286937Epoch 313/500, Training Loss: 0.6927493173120035, Validation Loss: 0.6929407235994895Epoch 314/500, Training Loss: 0.6927493081905645, Validation Loss: 0.692940714797533Epoch 315/500, Training Loss: 0.6927492990147535, Validation Loss: 0.6929407059219604Epoch 316/500, Training Loss: 0.6927492897839572, Validation Loss: 0.692940696971897Epoch 317/500, Training Loss: 0.6927492804975502, Validation Loss: 0.6929406879464548Epoch 318/500, Training Loss: 0.6927492711549015, Validation Loss: 0.6929406788447319Epoch 319/500, Training Loss: 0.6927492617553688, Validation Loss: 0.692940669665815Epoch 320/500, Training Loss: 0.6927492522983033, Validation Loss: 0.6929406604087761Epoch 321/500, Training Loss: 0.692749242783044, Validation Loss: 0.6929406510726747Epoch 322/500, Training Loss: 0.6927492332089243, Validation Loss: 0.6929406416565553Epoch 323/500, Training Loss: 0.6927492235752652, Validation Loss: 0.6929406321594487Epoch 324/500, Training Loss: 0.6927492138813786, Validation Loss: 0.692940622580372Epoch 325/500, Training Loss: 0.6927492041265699, Validation Loss: 0.6929406129183271Epoch 326/500, Training Loss: 0.6927491943101293, Validation Loss: 0.6929406031723013Epoch 327/500, Training Loss: 0.6927491844313407, Validation Loss: 0.6929405933412657Epoch 328/500, Training Loss: 0.6927491744894768, Validation Loss: 0.6929405834241777Epoch 329/500, Training Loss: 0.6927491644837999, Validation Loss: 0.6929405734199788Epoch 330/500, Training Loss: 0.6927491544135608, Validation Loss: 0.6929405633275929Epoch 331/500, Training Loss: 0.6927491442780005, Validation Loss: 0.692940553145929Epoch 332/500, Training Loss: 0.6927491340763482, Validation Loss: 0.6929405428738802Epoch 333/500, Training Loss: 0.6927491238078234, Validation Loss: 0.6929405325103211Epoch 334/500, Training Loss: 0.6927491134716318, Validation Loss: 0.6929405220541102Epoch 335/500, Training Loss: 0.6927491030669702, Validation Loss: 0.6929405115040886Epoch 336/500, Training Loss: 0.6927490925930211, Validation Loss: 0.6929405008590797Epoch 
337/500, Training Loss: 0.6927490820489568, Validation Loss: 0.6929404901178878Epoch 338/500, Training Loss: 0.6927490714339353, Validation Loss: 0.6929404792793001Epoch 339/500, Training Loss: 0.6927490607471046, Validation Loss: 0.6929404683420844Epoch 340/500, Training Loss: 0.6927490499875996, Validation Loss: 0.6929404573049892Epoch 341/500, Training Loss: 0.6927490391545393, Validation Loss: 0.6929404461667438Epoch 342/500, Training Loss: 0.6927490282470329, Validation Loss: 0.6929404349260583Epoch 343/500, Training Loss: 0.6927490172641757, Validation Loss: 0.6929404235816212Epoch 344/500, Training Loss: 0.6927490062050474, Validation Loss: 0.6929404121321017Epoch 345/500, Training Loss: 0.6927489950687156, Validation Loss: 0.6929404005761481Epoch 346/500, Training Loss: 0.6927489838542329, Validation Loss: 0.6929403889123859Epoch 347/500, Training Loss: 0.692748972560638, Validation Loss: 0.6929403771394214Epoch 348/500, Training Loss: 0.6927489611869546, Validation Loss: 0.692940365255835Epoch 349/500, Training Loss: 0.6927489497321904, Validation Loss: 0.6929403532601887Epoch 350/500, Training Loss: 0.6927489381953406, Validation Loss: 0.6929403411510183Epoch 351/500, Training Loss: 0.6927489265753822, Validation Loss: 0.6929403289268381Epoch 352/500, Training Loss: 0.6927489148712767, Validation Loss: 0.6929403165861381Epoch 353/500, Training Loss: 0.6927489030819712, Validation Loss: 0.6929403041273827Epoch 354/500, Training Loss: 0.6927488912063936, Validation Loss: 0.6929402915490135Epoch 355/500, Training Loss: 0.6927488792434593, Validation Loss: 0.6929402788494452Epoch 356/500, Training Loss: 0.6927488671920615, Validation Loss: 0.6929402660270677Epoch 357/500, Training Loss: 0.6927488550510793, Validation Loss: 0.6929402530802439Epoch 358/500, Training Loss: 0.6927488428193721, Validation Loss: 0.6929402400073109Epoch 359/500, Training Loss: 0.6927488304957846, Validation Loss: 0.6929402268065772Epoch 360/500, Training Loss: 0.6927488180791391, Validation Loss: 0.692940213476325Epoch 361/500, Training Loss: 0.6927488055682403, Validation Loss: 0.692940200014806Epoch 362/500, Training Loss: 0.6927487929618761, Validation Loss: 0.6929401864202452Epoch 363/500, Training Loss: 0.69274878025881, Validation Loss: 0.6929401726908371Epoch 364/500, Training Loss: 0.6927487674577917, Validation Loss: 0.6929401588247457Epoch 365/500, Training Loss: 0.6927487545575449, Validation Loss: 0.6929401448201056Epoch 366/500, Training Loss: 0.6927487415567776, Validation Loss: 0.6929401306750183Epoch 367/500, Training Loss: 0.6927487284541718, Validation Loss: 0.692940116387554Epoch 368/500, Training Loss: 0.6927487152483911, Validation Loss: 0.692940101955752Epoch 369/500, Training Loss: 0.6927487019380776, Validation Loss: 0.6929400873776161Epoch 370/500, Training Loss: 0.6927486885218477, Validation Loss: 0.692940072651117Epoch 371/500, Training Loss: 0.6927486749982984, Validation Loss: 0.6929400577741913Epoch 372/500, Training Loss: 0.6927486613660012, Validation Loss: 0.6929400427447401Epoch 373/500, Training Loss: 0.692748647623506, Validation Loss: 0.6929400275606277Epoch 374/500, Training Loss: 0.6927486337693357, Validation Loss: 0.6929400122196829Epoch 375/500, Training Loss: 0.692748619801989, Validation Loss: 0.6929399967196965Epoch 376/500, Training Loss: 0.6927486057199419, Validation Loss: 0.6929399810584203Epoch 377/500, Training Loss: 0.6927485915216418, Validation Loss: 0.6929399652335685Epoch 378/500, Training Loss: 0.6927485772055102, Validation Loss: 
0.6929399492428139Epoch 379/500, Training Loss: 0.6927485627699436, Validation Loss: 0.6929399330837901Epoch 380/500, Training Loss: 0.692748548213309, Validation Loss: 0.6929399167540883Epoch 381/500, Training Loss: 0.692748533533947, Validation Loss: 0.6929399002512573Epoch 382/500, Training Loss: 0.6927485187301674, Validation Loss: 0.6929398835728029Epoch 383/500, Training Loss: 0.6927485038002538, Validation Loss: 0.6929398667161866Epoch 384/500, Training Loss: 0.6927484887424575, Validation Loss: 0.6929398496788248Epoch 385/500, Training Loss: 0.692748473555, Validation Loss: 0.6929398324580881Epoch 386/500, Training Loss: 0.6927484582360741, Validation Loss: 0.6929398150512991Epoch 387/500, Training Loss: 0.6927484427838373, Validation Loss: 0.6929397974557341Epoch 388/500, Training Loss: 0.6927484271964173, Validation Loss: 0.6929397796686176Epoch 389/500, Training Loss: 0.6927484114719066, Validation Loss: 0.6929397616871273Epoch 390/500, Training Loss: 0.6927483956083671, Validation Loss: 0.6929397435083874Epoch 391/500, Training Loss: 0.6927483796038231, Validation Loss: 0.69293972512947Epoch 392/500, Training Loss: 0.6927483634562643, Validation Loss: 0.6929397065473949Epoch 393/500, Training Loss: 0.6927483471636472, Validation Loss: 0.6929396877591254Epoch 394/500, Training Loss: 0.6927483307238876, Validation Loss: 0.6929396687615716Epoch 395/500, Training Loss: 0.6927483141348656, Validation Loss: 0.6929396495515838Epoch 396/500, Training Loss: 0.6927482973944235, Validation Loss: 0.6929396301259565Epoch 397/500, Training Loss: 0.6927482805003631, Validation Loss: 0.6929396104814228Epoch 398/500, Training Loss: 0.6927482634504475, Validation Loss: 0.6929395906146559Epoch 399/500, Training Loss: 0.6927482462423965, Validation Loss: 0.692939570522267Epoch 400/500, Training Loss: 0.692748228873891, Validation Loss: 0.6929395502008029Epoch 401/500, Training Loss: 0.6927482113425664, Validation Loss: 0.6929395296467453Epoch 402/500, Training Loss: 0.692748193646017, Validation Loss: 0.692939508856511Epoch 403/500, Training Loss: 0.6927481757817903, Validation Loss: 0.6929394878264468Epoch 404/500, Training Loss: 0.6927481577473884, Validation Loss: 0.6929394665528319Epoch 405/500, Training Loss: 0.6927481395402678, Validation Loss: 0.6929394450318728Epoch 406/500, Training Loss: 0.6927481211578346, Validation Loss: 0.692939423259704Epoch 407/500, Training Loss: 0.692748102597449, Validation Loss: 0.6929394012323864Epoch 408/500, Training Loss: 0.69274808385642, Validation Loss: 0.6929393789459042Epoch 409/500, Training Loss: 0.6927480649320054, Validation Loss: 0.692939356396163Epoch 410/500, Training Loss: 0.6927480458214095, Validation Loss: 0.6929393335789905Epoch 411/500, Training Loss: 0.6927480265217849, Validation Loss: 0.692939310490132Epoch 412/500, Training Loss: 0.6927480070302285, Validation Loss: 0.6929392871252499Epoch 413/500, Training Loss: 0.6927479873437805, Validation Loss: 0.6929392634799207Epoch 414/500, Training Loss: 0.6927479674594254, Validation Loss: 0.6929392395496347Epoch 415/500, Training Loss: 0.6927479473740883, Validation Loss: 0.6929392153297927Epoch 416/500, Training Loss: 0.6927479270846334, Validation Loss: 0.6929391908157031Epoch 417/500, Training Loss: 0.6927479065878639, Validation Loss: 0.6929391660025825Epoch 418/500, Training Loss: 0.6927478858805208, Validation Loss: 0.6929391408855502Epoch 419/500, Training Loss: 0.6927478649592802, Validation Loss: 0.6929391154596292Epoch 420/500, Training Loss: 0.6927478438207515, Validation Loss: 
0.6929390897197408Epoch 421/500, Training Loss: 0.6927478224614774, Validation Loss: 0.6929390636607036Epoch 422/500, Training Loss: 0.6927478008779303, Validation Loss: 0.6929390372772318Epoch 423/500, Training Loss: 0.6927477790665146, Validation Loss: 0.6929390105639315Epoch 424/500, Training Loss: 0.6927477570235585, Validation Loss: 0.6929389835152983Epoch 425/500, Training Loss: 0.692747734745318, Validation Loss: 0.6929389561257157Epoch 426/500, Training Loss: 0.6927477122279733, Validation Loss: 0.6929389283894495Epoch 427/500, Training Loss: 0.6927476894676253, Validation Loss: 0.6929389003006484Epoch 428/500, Training Loss: 0.6927476664602963, Validation Loss: 0.6929388718533401Epoch 429/500, Training Loss: 0.6927476432019248, Validation Loss: 0.6929388430414258Epoch 430/500, Training Loss: 0.6927476196883686, Validation Loss: 0.6929388138586814Epoch 431/500, Training Loss: 0.692747595915397, Validation Loss: 0.6929387842987502Epoch 432/500, Training Loss: 0.6927475718786911, Validation Loss: 0.692938754355142Epoch 433/500, Training Loss: 0.6927475475738424, Validation Loss: 0.6929387240212301Epoch 434/500, Training Loss: 0.692747522996352, Validation Loss: 0.692938693290246Epoch 435/500, Training Loss: 0.6927474981416225, Validation Loss: 0.6929386621552763Epoch 436/500, Training Loss: 0.6927474730049605, Validation Loss: 0.6929386306092611Epoch 437/500, Training Loss: 0.6927474475815737, Validation Loss: 0.6929385986449881Epoch 438/500, Training Loss: 0.6927474218665665, Validation Loss: 0.6929385662550883Epoch 439/500, Training Loss: 0.6927473958549385, Validation Loss: 0.6929385334320348Epoch 440/500, Training Loss: 0.6927473695415834, Validation Loss: 0.6929385001681359Epoch 441/500, Training Loss: 0.6927473429212803, Validation Loss: 0.6929384664555313Epoch 442/500, Training Loss: 0.6927473159887005, Validation Loss: 0.6929384322861896Epoch 443/500, Training Loss: 0.6927472887383932, Validation Loss: 0.6929383976519012Epoch 444/500, Training Loss: 0.6927472611647933, Validation Loss: 0.6929383625442769Epoch 445/500, Training Loss: 0.692747233262212, Validation Loss: 0.6929383269547394Epoch 446/500, Training Loss: 0.6927472050248322, Validation Loss: 0.6929382908745212Epoch 447/500, Training Loss: 0.6927471764467117, Validation Loss: 0.692938254294659Epoch 448/500, Training Loss: 0.6927471475217729, Validation Loss: 0.692938217205987Epoch 449/500, Training Loss: 0.692747118243802, Validation Loss: 0.6929381795991331Epoch 450/500, Training Loss: 0.6927470886064502, Validation Loss: 0.6929381414645134Epoch 451/500, Training Loss: 0.6927470586032201, Validation Loss: 0.6929381027923252Epoch 452/500, Training Loss: 0.692747028227471, Validation Loss: 0.6929380635725421Epoch 453/500, Training Loss: 0.6927469974724063, Validation Loss: 0.692938023794908Epoch 454/500, Training Loss: 0.6927469663310803, Validation Loss: 0.6929379834489309Epoch 455/500, Training Loss: 0.6927469347963832, Validation Loss: 0.6929379425238754Epoch 456/500, Training Loss: 0.6927469028610415, Validation Loss: 0.6929379010087567Epoch 457/500, Training Loss: 0.6927468705176149, Validation Loss: 0.692937858892335Epoch 458/500, Training Loss: 0.6927468377584898, Validation Loss: 0.6929378161631051Epoch 459/500, Training Loss: 0.6927468045758729, Validation Loss: 0.692937772809293Epoch 460/500, Training Loss: 0.6927467709617919, Validation Loss: 0.6929377288188469Epoch 461/500, Training Loss: 0.6927467369080813, Validation Loss: 0.6929376841794264Epoch 462/500, Training Loss: 0.6927467024063853, Validation 
Loss: 0.6929376388783998Epoch 463/500, Training Loss: 0.6927466674481495, Validation Loss: 0.6929375929028314Epoch 464/500, Training Loss: 0.6927466320246117, Validation Loss: 0.6929375462394753Epoch 465/500, Training Loss: 0.6927465961268047, Validation Loss: 0.6929374988747669Epoch 466/500, Training Loss: 0.6927465597455391, Validation Loss: 0.6929374507948097Epoch 467/500, Training Loss: 0.6927465228714078, Validation Loss: 0.6929374019853711Epoch 468/500, Training Loss: 0.6927464854947699, Validation Loss: 0.6929373524318704Epoch 469/500, Training Loss: 0.6927464476057541, Validation Loss: 0.6929373021193683Epoch 470/500, Training Loss: 0.6927464091942426, Validation Loss: 0.6929372510325552Epoch 471/500, Training Loss: 0.6927463702498675, Validation Loss: 0.6929371991557451Epoch 472/500, Training Loss: 0.6927463307620061, Validation Loss: 0.692937146472858Epoch 473/500, Training Loss: 0.6927462907197691, Validation Loss: 0.6929370929674141Epoch 474/500, Training Loss: 0.6927462501119981, Validation Loss: 0.6929370386225184Epoch 475/500, Training Loss: 0.6927462089272467, Validation Loss: 0.6929369834208499Epoch 476/500, Training Loss: 0.6927461671537875, Validation Loss: 0.6929369273446482Epoch 477/500, Training Loss: 0.6927461247795894, Validation Loss: 0.6929368703756997Epoch 478/500, Training Loss: 0.692746081792317, Validation Loss: 0.6929368124953266Epoch 479/500, Training Loss: 0.6927460381793165, Validation Loss: 0.692936753684368Epoch 480/500, Training Loss: 0.692745993927611, Validation Loss: 0.6929366939231709Epoch 481/500, Training Loss: 0.6927459490238841, Validation Loss: 0.6929366331915705Epoch 482/500, Training Loss: 0.6927459034544756, Validation Loss: 0.6929365714688773Epoch 483/500, Training Loss: 0.692745857205368, Validation Loss: 0.6929365087338588Epoch 484/500, Training Loss: 0.6927458102621732, Validation Loss: 0.6929364449647245Epoch 485/500, Training Loss: 0.6927457626101251, Validation Loss: 0.6929363801391064Epoch 486/500, Training Loss: 0.692745714234067, Validation Loss: 0.6929363142340441Epoch 487/500, Training Loss: 0.6927456651184332, Validation Loss: 0.6929362472259621Epoch 488/500, Training Loss: 0.6927456152472462, Validation Loss: 0.6929361790906532Epoch 489/500, Training Loss: 0.6927455646040961, Validation Loss: 0.692936109803259Epoch 490/500, Training Loss: 0.6927455131721275, Validation Loss: 0.6929360393382444Epoch 491/500, Training Loss: 0.6927454609340272, Validation Loss: 0.6929359676693821Epoch 492/500, Training Loss: 0.6927454078720094, Validation Loss: 0.6929358947697267Epoch 493/500, Training Loss: 0.6927453539677992, Validation Loss: 0.6929358206115905Epoch 494/500, Training Loss: 0.6927452992026154, Validation Loss: 0.6929357451665228Epoch 495/500, Training Loss: 0.6927452435571579, Validation Loss: 0.6929356684052814Epoch 496/500, Training Loss: 0.692745187011585, Validation Loss: 0.6929355902978088Epoch 497/500, Training Loss: 0.6927451295455007, Validation Loss: 0.6929355108132045Epoch 498/500, Training Loss: 0.6927450711379319, Validation Loss: 0.6929354299196964Epoch 499/500, Training Loss: 0.6927450117673126, Validation Loss: 0.6929353475846134Epoch 500/500, Training Loss: 0.6927449514114608, Validation Loss: 0.6929352637743532
The output and loss plot above show the training run with learning rate = 0.01 and max_epochs = 500. Both the training and validation losses hover near ln(2) ≈ 0.6931, which is the binary cross-entropy of a chance-level classifier, suggesting this configuration learns very little about the data.
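One detail worth noticing in the trace: the validation loss reaches its minimum around epoch 5 and never recovers that value over the remaining 495 epochs, even as the training loss keeps inching down. That pattern is what early stopping is designed to catch. Below is a minimal, self-contained sketch (a hypothetical helper, not part of the assignment's myNeuralNetwork API) that scans a recorded validation-loss curve for the epoch to stop at:

# Hypothetical early-stopping helper (not part of myNeuralNetwork):
# scan a recorded validation-loss curve and return the 1-indexed epoch
# with the best loss, giving up once `patience` epochs pass with no improvement.
def early_stop_epoch(val_losses, patience=20):
    best_epoch, best_loss, wait = 1, float("inf"), 0
    for i, loss in enumerate(val_losses, start=1):
        if loss < best_loss:
            best_epoch, best_loss, wait = i, loss, 0
        else:
            wait += 1
            if wait >= patience:
                break
    return best_epoch

# Applied to the run above, this would flag roughly epoch 5:
# stop = early_stop_epoch(validation_loss)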
# find the best hyperparameters
# define a set of learning rates and epoch counts to explore
learning_rates = [0.01, 0.05, 0.1, 0.15]
epochs = [100, 300, 500, 800]

# dictionary mapping (learning rate, epochs) -> final validation loss
validation_losses = {}

# loop through each combination of learning rate and epoch count
for lr in learning_rates:
    for epoch in epochs:
        # initialize the neural network with the current learning rate
        nn = myNeuralNetwork(n_in=2, n_layer1=5, n_layer2=5, n_out=1, learning_rate=lr)
        # train the model and collect training and validation losses
        training_loss, validation_loss = nn.fit(
            X_train,
            y_train,
            max_epochs=epoch,
            learning_rate=lr,
            get_validation_loss=True,
            X_val=X_val,
            y_val=y_val,
        )
        # store the final validation loss for the current combination
        validation_losses[(lr, epoch)] = validation_loss[-1]

# find the combination of learning rate and epochs with the lowest validation loss
best_lr, best_epoch = min(validation_losses, key=validation_losses.get)
best_loss = validation_losses[(best_lr, best_epoch)]
print(f"Best combination: lr = {best_lr}, epochs = {best_epoch}, "
      f"validation loss = {best_loss:.6f}")

# retrain the model with the best combination of learning rate and epochs
nn_best = myNeuralNetwork(
    n_in=2, n_layer1=5, n_layer2=5, n_out=1, learning_rate=best_lr
)
training_loss_best, validation_loss_best = nn_best.fit(
    X_train,
    y_train,
    max_epochs=best_epoch,
    learning_rate=best_lr,
    get_validation_loss=True,
    X_val=X_val,
    y_val=y_val,
)

# plot the cost function for the best hyperparameters
plt.plot(training_loss_best, label="Training Loss")
plt.plot(validation_loss_best, label="Validation Loss")
plt.xlabel("Epoch")
plt.ylabel("Loss")
plt.legend()
plt.title("Training and Validation Loss Curves (Best Hyperparameters)")
plt.show()
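The fit calls above print the losses at every epoch, which produces the unwieldy output below. A small hypothetical helper (assumed names, not part of myNeuralNetwork) that could be called from inside the training loop to throttle the printing:

# Hypothetical logging helper (assumed names; not part of myNeuralNetwork):
# print the losses only at the first epoch, every `log_every` epochs,
# and the final epoch, so the notebook output stays readable.
def log_losses(epoch, max_epochs, train_loss, val_loss, log_every=50):
    if epoch == 1 or epoch % log_every == 0 or epoch == max_epochs:
        print(
            f"Epoch {epoch}/{max_epochs}, "
            f"Training Loss: {train_loss:.6f}, Validation Loss: {val_loss:.6f}"
        )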
Epoch 1/100, Training Loss: 0.6936582183242088, Validation Loss: 0.6931640713978251
...
Epoch 100/100, Training Loss: 0.6927531393248942, Validation Loss: 0.6929419102260803
Epoch 1/300, Training Loss: 0.6931493110186535, Validation Loss: 0.6929169652559233
...
[per-epoch output for the remaining (learning rate, epochs) combinations condensed for readability; the captured log is truncated partway through the 300-epoch run]
0.6927507063340684, Validation Loss: 0.6929421041133441Epoch 239/300, Training Loss: 0.6927507033771751, Validation Loss: 0.6929421011365956Epoch 240/300, Training Loss: 0.6927507003904818, Validation Loss: 0.6929420981297217Epoch 241/300, Training Loss: 0.6927506973735397, Validation Loss: 0.69294209509224Epoch 242/300, Training Loss: 0.6927506943258923, Validation Loss: 0.692942092023659Epoch 243/300, Training Loss: 0.6927506912470769, Validation Loss: 0.6929420889234794Epoch 244/300, Training Loss: 0.6927506881366242, Validation Loss: 0.6929420857911945Epoch 245/300, Training Loss: 0.6927506849940548, Validation Loss: 0.6929420826262875Epoch 246/300, Training Loss: 0.692750681818884, Validation Loss: 0.6929420794282366Epoch 247/300, Training Loss: 0.6927506786106196, Validation Loss: 0.6929420761965073Epoch 248/300, Training Loss: 0.6927506753687586, Validation Loss: 0.6929420729305589Epoch 249/300, Training Loss: 0.6927506720927925, Validation Loss: 0.6929420696298406Epoch 250/300, Training Loss: 0.6927506687822031, Validation Loss: 0.6929420662937945Epoch 251/300, Training Loss: 0.692750665436466, Validation Loss: 0.6929420629218497Epoch 252/300, Training Loss: 0.6927506620550448, Validation Loss: 0.6929420595134291Epoch 253/300, Training Loss: 0.6927506586373974, Validation Loss: 0.6929420560679453Epoch 254/300, Training Loss: 0.6927506551829705, Validation Loss: 0.6929420525847996Epoch 255/300, Training Loss: 0.6927506516912048, Validation Loss: 0.692942049063384Epoch 256/300, Training Loss: 0.692750648161527, Validation Loss: 0.6929420455030805Epoch 257/300, Training Loss: 0.6927506445933592, Validation Loss: 0.6929420419032606Epoch 258/300, Training Loss: 0.6927506409861096, Validation Loss: 0.6929420382632839Epoch 259/300, Training Loss: 0.6927506373391825, Validation Loss: 0.6929420345825018Epoch 260/300, Training Loss: 0.6927506336519649, Validation Loss: 0.6929420308602513Epoch 261/300, Training Loss: 0.6927506299238378, Validation Loss: 0.6929420270958604Epoch 262/300, Training Loss: 0.6927506261541715, Validation Loss: 0.6929420232886431Epoch 263/300, Training Loss: 0.6927506223423249, Validation Loss: 0.6929420194379051Epoch 264/300, Training Loss: 0.692750618487648, Validation Loss: 0.6929420155429362Epoch 265/300, Training Loss: 0.6927506145894748, Validation Loss: 0.6929420116030166Epoch 266/300, Training Loss: 0.692750610647135, Validation Loss: 0.692942007617413Epoch 267/300, Training Loss: 0.6927506066599401, Validation Loss: 0.6929420035853779Epoch 268/300, Training Loss: 0.6927506026271958, Validation Loss: 0.6929419995061539Epoch 269/300, Training Loss: 0.6927505985481911, Validation Loss: 0.6929419953789667Epoch 270/300, Training Loss: 0.6927505944222053, Validation Loss: 0.6929419912030303Epoch 271/300, Training Loss: 0.6927505902485044, Validation Loss: 0.6929419869775446Epoch 272/300, Training Loss: 0.692750586026342, Validation Loss: 0.6929419827016944Epoch 273/300, Training Loss: 0.6927505817549577, Validation Loss: 0.6929419783746507Epoch 274/300, Training Loss: 0.6927505774335798, Validation Loss: 0.6929419739955696Epoch 275/300, Training Loss: 0.6927505730614232, Validation Loss: 0.6929419695635919Epoch 276/300, Training Loss: 0.6927505686376855, Validation Loss: 0.6929419650778424Epoch 277/300, Training Loss: 0.6927505641615542, Validation Loss: 0.6929419605374307Epoch 278/300, Training Loss: 0.6927505596322001, Validation Loss: 0.6929419559414508Epoch 279/300, Training Loss: 0.6927505550487794, Validation Loss: 0.692941951288979Epoch 280/300, Training 
Loss: 0.6927505504104373, Validation Loss: 0.6929419465790747Epoch 281/300, Training Loss: 0.6927505457162962, Validation Loss: 0.6929419418107812Epoch 282/300, Training Loss: 0.69275054096547, Validation Loss: 0.6929419369831235Epoch 283/300, Training Loss: 0.6927505361570525, Validation Loss: 0.692941932095109Epoch 284/300, Training Loss: 0.6927505312901238, Validation Loss: 0.6929419271457256Epoch 285/300, Training Loss: 0.6927505263637457, Validation Loss: 0.6929419221339443Epoch 286/300, Training Loss: 0.6927505213769627, Validation Loss: 0.692941917058716Epoch 287/300, Training Loss: 0.6927505163288035, Validation Loss: 0.6929419119189716Epoch 288/300, Training Loss: 0.6927505112182792, Validation Loss: 0.6929419067136225Epoch 289/300, Training Loss: 0.6927505060443815, Validation Loss: 0.6929419014415592Epoch 290/300, Training Loss: 0.6927505008060841, Validation Loss: 0.6929418961016528Epoch 291/300, Training Loss: 0.6927504955023426, Validation Loss: 0.6929418906927506Epoch 292/300, Training Loss: 0.6927504901320917, Validation Loss: 0.6929418852136806Epoch 293/300, Training Loss: 0.692750484694249, Validation Loss: 0.6929418796632463Epoch 294/300, Training Loss: 0.6927504791877103, Validation Loss: 0.69294187404023Epoch 295/300, Training Loss: 0.6927504736113511, Validation Loss: 0.6929418683433899Epoch 296/300, Training Loss: 0.6927504679640263, Validation Loss: 0.6929418625714611Epoch 297/300, Training Loss: 0.6927504622445674, Validation Loss: 0.6929418567231529Epoch 298/300, Training Loss: 0.6927504564517879, Validation Loss: 0.6929418507971516Epoch 299/300, Training Loss: 0.6927504505844768, Validation Loss: 0.6929418447921166Epoch 300/300, Training Loss: 0.6927504446413996, Validation Loss: 0.6929418387066815Epoch 1/500, Training Loss: 0.6936282423031431, Validation Loss: 0.6931506525167531Epoch 2/500, Training Loss: 0.6933246963587925, Validation Loss: 0.6929952798284581Epoch 3/500, Training Loss: 0.693123757811368, Validation Loss: 0.6929067491623394Epoch 4/500, Training Loss: 0.692991119261966, Validation Loss: 0.6928601762115302Epoch 5/500, Training Loss: 0.6929038738908959, Validation Loss: 0.692839404553555Epoch 6/500, Training Loss: 0.6928467438612974, Validation Loss: 0.6928340318930195Epoch 7/500, Training Loss: 0.6928095498806311, Validation Loss: 0.692837432931372Epoch 8/500, Training Loss: 0.692785517143399, Validation Loss: 0.6928454490945823Epoch 9/500, Training Loss: 0.6927701426129259, Validation Loss: 0.6928555227922483Epoch 10/500, Training Loss: 0.692760438180698, Validation Loss: 0.6928661272136658Epoch 11/500, Training Loss: 0.6927544252020563, Validation Loss: 0.6928763921793817Epoch 12/500, Training Loss: 0.6927507970381633, Validation Loss: 0.6928858597835694Epoch 13/500, Training Loss: 0.6927486938497361, Validation Loss: 0.6928943257622316Epoch 14/500, Training Loss: 0.6927475523786987, Validation Loss: 0.6929017373254462Epoch 15/500, Training Loss: 0.6927470058175754, Validation Loss: 0.6929081280492566Epoch 16/500, Training Loss: 0.6927468171282538, Validation Loss: 0.6929135769804926Epoch 17/500, Training Loss: 0.6927468346918443, Validation Loss: 0.6929181834667864Epoch 18/500, Training Loss: 0.692746962859807, Validation Loss: 0.6929220521188595Epoch 19/500, Training Loss: 0.692747142441619, Validation Loss: 0.6929252842324961Epoch 20/500, Training Loss: 0.6927473378119762, Validation Loss: 0.6929279732695746Epoch 21/500, Training Loss: 0.6927475284221556, Validation Loss: 0.6929302028383539Epoch 22/500, Training Loss: 0.6927477032366305, 
Validation Loss: 0.6929320461675451Epoch 23/500, Training Loss: 0.6927478571084202, Validation Loss: 0.6929335664328988Epoch 24/500, Training Loss: 0.6927479884357718, Validation Loss: 0.6929348175331963Epoch 25/500, Training Loss: 0.6927480976626346, Validation Loss: 0.6929358450673502Epoch 26/500, Training Loss: 0.6927481863322726, Validation Loss: 0.6929366873641497Epoch 27/500, Training Loss: 0.6927482565013098, Validation Loss: 0.692937376479886Epoch 28/500, Training Loss: 0.6927483103868467, Validation Loss: 0.6929379391191348Epoch 29/500, Training Loss: 0.6927483501627327, Validation Loss: 0.6929383974586603Epoch 30/500, Training Loss: 0.6927483778500207, Validation Loss: 0.6929387698691645Epoch 31/500, Training Loss: 0.6927483952657074, Validation Loss: 0.6929390715380379Epoch 32/500, Training Loss: 0.6927484040066322, Validation Loss: 0.6929393150007146Epoch 33/500, Training Loss: 0.6927484054536502, Validation Loss: 0.6929395105902394Epoch 34/500, Training Loss: 0.6927484007867248, Validation Loss: 0.6929396668151788Epoch 35/500, Training Loss: 0.6927483910051379, Validation Loss: 0.6929397906757068Epoch 36/500, Training Loss: 0.692748376949323, Validation Loss: 0.6929398879269242Epoch 37/500, Training Loss: 0.692748359322283, Validation Loss: 0.6929399632975427Epoch 38/500, Training Loss: 0.69274833870953, Validation Loss: 0.6929400206710434Epoch 39/500, Training Loss: 0.6927483155970097, Validation Loss: 0.6929400632354452Epoch 40/500, Training Loss: 0.6927482903868736, Validation Loss: 0.6929400936069284Epoch 41/500, Training Loss: 0.6927482634111135, Validation Loss: 0.6929401139317426Epoch 42/500, Training Loss: 0.692748234943226, Validation Loss: 0.6929401259701232Epoch 43/500, Training Loss: 0.6927482052080942, Validation Loss: 0.6929401311653376Epoch 44/500, Training Loss: 0.6927481743903017, Validation Loss: 0.6929401307004555Epoch 45/500, Training Loss: 0.6927481426410969, Validation Loss: 0.6929401255449984Epoch 46/500, Training Loss: 0.6927481100841947, Validation Loss: 0.6929401164932584Epoch 47/500, Training Loss: 0.6927480768205945, Validation Loss: 0.6929401041957595Epoch 48/500, Training Loss: 0.6927480429325856, Validation Loss: 0.6929400891850807Epoch 49/500, Training Loss: 0.6927480084870413, Validation Loss: 0.6929400718970499Epoch 50/500, Training Loss: 0.6927479735381514, Validation Loss: 0.6929400526881306Epoch 51/500, Training Loss: 0.6927479381296733, Validation Loss: 0.6929400318496846Epoch 52/500, Training Loss: 0.6927479022967804, Validation Loss: 0.6929400096196725Epoch 53/500, Training Loss: 0.6927478660675848, Validation Loss: 0.6929399861922481Epoch 54/500, Training Loss: 0.6927478294643925, Validation Loss: 0.6929399617256241Epoch 55/500, Training Loss: 0.692747792504729, Validation Loss: 0.692939936348529Epoch 56/500, Training Loss: 0.6927477552021889, Validation Loss: 0.6929399101654945Epoch 57/500, Training Loss: 0.6927477175671249, Validation Loss: 0.6929398832611926Epoch 58/500, Training Loss: 0.6927476796072163, Validation Loss: 0.6929398557039927Epoch 59/500, Training Loss: 0.6927476413279394, Validation Loss: 0.6929398275488767Epoch 60/500, Training Loss: 0.6927476027329458, Validation Loss: 0.6929397988398295Epoch 61/500, Training Loss: 0.6927475638243765, Validation Loss: 0.6929397696117964Epoch 62/500, Training Loss: 0.6927475246031194, Validation Loss: 0.6929397398922899Epoch 63/500, Training Loss: 0.6927474850690194, Validation Loss: 0.6929397097027061Epoch 64/500, Training Loss: 0.6927474452210474, Validation Loss: 
0.6929396790594026Epoch 65/500, Training Loss: 0.6927474050574465, Validation Loss: 0.6929396479745825Epoch 66/500, Training Loss: 0.6927473645758406, Validation Loss: 0.6929396164570166Epoch 67/500, Training Loss: 0.6927473237733301, Validation Loss: 0.69293958451264Epoch 68/500, Training Loss: 0.6927472826465711, Validation Loss: 0.692939552145034Epoch 69/500, Training Loss: 0.6927472411918336, Validation Loss: 0.6929395193558265Epoch 70/500, Training Loss: 0.6927471994050544, Validation Loss: 0.6929394861450161Epoch 71/500, Training Loss: 0.6927471572818776, Validation Loss: 0.6929394525112371Epoch 72/500, Training Loss: 0.6927471148176867, Validation Loss: 0.6929394184519795Epoch 73/500, Training Loss: 0.6927470720076319, Validation Loss: 0.6929393839637635Epoch 74/500, Training Loss: 0.6927470288466526, Validation Loss: 0.6929393490422887Epoch 75/500, Training Loss: 0.6927469853294914, Validation Loss: 0.6929393136825467Epoch 76/500, Training Loss: 0.6927469414507106, Validation Loss: 0.6929392778789223Epoch 77/500, Training Loss: 0.6927468972047004, Validation Loss: 0.6929392416252679Epoch 78/500, Training Loss: 0.6927468525856895, Validation Loss: 0.6929392049149681Epoch 79/500, Training Loss: 0.6927468075877451, Validation Loss: 0.6929391677409912Epoch 80/500, Training Loss: 0.6927467622047863, Validation Loss: 0.6929391300959282Epoch 81/500, Training Loss: 0.6927467164305787, Validation Loss: 0.6929390919720269Epoch 82/500, Training Loss: 0.6927466702587399, Validation Loss: 0.6929390533612165Epoch 83/500, Training Loss: 0.6927466236827388, Validation Loss: 0.69293901425513Epoch 84/500, Training Loss: 0.6927465766958978, Validation Loss: 0.6929389746451171Epoch 85/500, Training Loss: 0.6927465292913905, Validation Loss: 0.6929389345222574Epoch 86/500, Training Loss: 0.6927464814622376, Validation Loss: 0.6929388938773685Epoch 87/500, Training Loss: 0.6927464332013126, Validation Loss: 0.6929388527010131Epoch 88/500, Training Loss: 0.6927463845013305, Validation Loss: 0.6929388109835013Epoch 89/500, Training Loss: 0.6927463353548483, Validation Loss: 0.6929387687148921Epoch 90/500, Training Loss: 0.6927462857542689, Validation Loss: 0.6929387258849965Epoch 91/500, Training Loss: 0.6927462356918256, Validation Loss: 0.6929386824833748Epoch 92/500, Training Loss: 0.6927461851595882, Validation Loss: 0.6929386384993326Epoch 93/500, Training Loss: 0.6927461341494556, Validation Loss: 0.6929385939219224Epoch 94/500, Training Loss: 0.6927460826531501, Validation Loss: 0.6929385487399367Epoch 95/500, Training Loss: 0.6927460306622153, Validation Loss: 0.6929385029419033Epoch 96/500, Training Loss: 0.6927459781680114, Validation Loss: 0.6929384565160819Epoch 97/500, Training Loss: 0.6927459251617104, Validation Loss: 0.6929384094504585Epoch 98/500, Training Loss: 0.6927458716342909, Validation Loss: 0.6929383617327359Epoch 99/500, Training Loss: 0.6927458175765319, Validation Loss: 0.6929383133503332Epoch 100/500, Training Loss: 0.6927457629790096, Validation Loss: 0.6929382642903716Epoch 101/500, Training Loss: 0.6927457078320904, Validation Loss: 0.6929382145396719Epoch 102/500, Training Loss: 0.692745652125927, Validation Loss: 0.6929381640847458Epoch 103/500, Training Loss: 0.6927455958504478, Validation Loss: 0.6929381129117859Epoch 104/500, Training Loss: 0.6927455389953563, Validation Loss: 0.6929380610066587Epoch 105/500, Training Loss: 0.6927454815501256, Validation Loss: 0.6929380083548945Epoch 106/500, Training Loss: 0.6927454235039818, Validation Loss: 0.6929379549416794Epoch 
107/500, Training Loss: 0.6927453648459083, Validation Loss: 0.6929379007518437Epoch 108/500, Training Loss: 0.6927453055646354, Validation Loss: 0.692937845769855Epoch 109/500, Training Loss: 0.6927452456486309, Validation Loss: 0.6929377899798026Epoch 110/500, Training Loss: 0.692745185086092, Validation Loss: 0.6929377333653922Epoch 111/500, Training Loss: 0.6927451238649445, Validation Loss: 0.6929376759099314Epoch 112/500, Training Loss: 0.6927450619728248, Validation Loss: 0.6929376175963179Epoch 113/500, Training Loss: 0.6927449993970793, Validation Loss: 0.6929375584070302Epoch 114/500, Training Loss: 0.6927449361247506, Validation Loss: 0.6929374983241114Epoch 115/500, Training Loss: 0.6927448721425753, Validation Loss: 0.6929374373291597Epoch 116/500, Training Loss: 0.6927448074369671, Validation Loss: 0.6929373754033124Epoch 117/500, Training Loss: 0.6927447419940117, Validation Loss: 0.6929373125272338Epoch 118/500, Training Loss: 0.6927446757994565, Validation Loss: 0.6929372486811005Epoch 119/500, Training Loss: 0.6927446088387001, Validation Loss: 0.6929371838445857Epoch 120/500, Training Loss: 0.6927445410967805, Validation Loss: 0.6929371179968445Epoch 121/500, Training Loss: 0.6927444725583665, Validation Loss: 0.6929370511164981Epoch 122/500, Training Loss: 0.692744403207745, Validation Loss: 0.6929369831816171Epoch 123/500, Training Loss: 0.6927443330288083, Validation Loss: 0.692936914169705Epoch 124/500, Training Loss: 0.6927442620050479, Validation Loss: 0.6929368440576795Epoch 125/500, Training Loss: 0.6927441901195308, Validation Loss: 0.6929367728218544Epoch 126/500, Training Loss: 0.6927441173548977, Validation Loss: 0.692936700437922Epoch 127/500, Training Loss: 0.692744043693343, Validation Loss: 0.6929366268809309Epoch 128/500, Training Loss: 0.6927439691166036, Validation Loss: 0.6929365521252687Epoch 129/500, Training Loss: 0.6927438936059441, Validation Loss: 0.6929364761446378Epoch 130/500, Training Loss: 0.6927438171421372, Validation Loss: 0.6929363989120361Epoch 131/500, Training Loss: 0.6927437397054593, Validation Loss: 0.6929363203997315Epoch 132/500, Training Loss: 0.6927436612756623, Validation Loss: 0.6929362405792416Epoch 133/500, Training Loss: 0.6927435818319618, Validation Loss: 0.692936159421307Epoch 134/500, Training Loss: 0.6927435013530234, Validation Loss: 0.6929360768958657Epoch 135/500, Training Loss: 0.692743419816938, Validation Loss: 0.6929359929720288Epoch 136/500, Training Loss: 0.6927433372012068, Validation Loss: 0.6929359076180507Epoch 137/500, Training Loss: 0.6927432534827233, Validation Loss: 0.6929358208013028Epoch 138/500, Training Loss: 0.6927431686377498, Validation Loss: 0.6929357324882417Epoch 139/500, Training Loss: 0.6927430826418972, Validation Loss: 0.6929356426443817Epoch 140/500, Training Loss: 0.6927429954701056, Validation Loss: 0.6929355512342604Epoch 141/500, Training Loss: 0.6927429070966191, Validation Loss: 0.6929354582214065Epoch 142/500, Training Loss: 0.6927428174949651, Validation Loss: 0.6929353635683058Epoch 143/500, Training Loss: 0.6927427266379261, Validation Loss: 0.6929352672363654Epoch 144/500, Training Loss: 0.6927426344975174, Validation Loss: 0.6929351691858759Epoch 145/500, Training Loss: 0.6927425410449607, Validation Loss: 0.6929350693759726Epoch 146/500, Training Loss: 0.6927424462506561, Validation Loss: 0.6929349677645982Epoch 147/500, Training Loss: 0.6927423500841509, Validation Loss: 0.6929348643084574Epoch 148/500, Training Loss: 0.6927422525141171, Validation Loss: 
0.6929347589629751Epoch 149/500, Training Loss: 0.692742153508311, Validation Loss: 0.6929346516822519Epoch 150/500, Training Loss: 0.6927420530335491, Validation Loss: 0.6929345424190158Epoch 151/500, Training Loss: 0.6927419510556709, Validation Loss: 0.6929344311245733Epoch 152/500, Training Loss: 0.6927418475395064, Validation Loss: 0.6929343177487596Epoch 153/500, Training Loss: 0.6927417424488354, Validation Loss: 0.692934202239885Epoch 154/500, Training Loss: 0.6927416357463559, Validation Loss: 0.6929340845446779Epoch 155/500, Training Loss: 0.6927415273936386, Validation Loss: 0.6929339646082298Epoch 156/500, Training Loss: 0.6927414173510934, Validation Loss: 0.6929338423739334Epoch 157/500, Training Loss: 0.6927413055779166, Validation Loss: 0.69293371778342Epoch 158/500, Training Loss: 0.6927411920320568, Validation Loss: 0.6929335907764956Epoch 159/500, Training Loss: 0.6927410766701605, Validation Loss: 0.6929334612910717Epoch 160/500, Training Loss: 0.6927409594475284, Validation Loss: 0.6929333292630935Epoch 161/500, Training Loss: 0.692740840318063, Validation Loss: 0.6929331946264674Epoch 162/500, Training Loss: 0.6927407192342174, Validation Loss: 0.6929330573129818Epoch 163/500, Training Loss: 0.6927405961469348, Validation Loss: 0.6929329172522286Epoch 164/500, Training Loss: 0.6927404710055995, Validation Loss: 0.692932774371516Epoch 165/500, Training Loss: 0.6927403437579693, Validation Loss: 0.6929326285957834Epoch 166/500, Training Loss: 0.6927402143501181, Validation Loss: 0.6929324798475062Epoch 167/500, Training Loss: 0.6927400827263662, Validation Loss: 0.6929323280466015Epoch 168/500, Training Loss: 0.6927399488292149, Validation Loss: 0.6929321731103284Epoch 169/500, Training Loss: 0.6927398125992719, Validation Loss: 0.6929320149531795Epoch 170/500, Training Loss: 0.6927396739751792, Validation Loss: 0.692931853486775Epoch 171/500, Training Loss: 0.6927395328935326, Validation Loss: 0.6929316886197443Epoch 172/500, Training Loss: 0.6927393892888001, Validation Loss: 0.6929315202576072Epoch 173/500, Training Loss: 0.692739243093239, Validation Loss: 0.6929313483026478Epoch 174/500, Training Loss: 0.692739094236799, Validation Loss: 0.6929311726537827Epoch 175/500, Training Loss: 0.6927389426470353, Validation Loss: 0.6929309932064223Epoch 176/500, Training Loss: 0.6927387882490046, Validation Loss: 0.6929308098523264Epoch 177/500, Training Loss: 0.6927386309651665, Validation Loss: 0.6929306224794523Epoch 178/500, Training Loss: 0.692738470715272, Validation Loss: 0.6929304309717969Epoch 179/500, Training Loss: 0.6927383074162489, Validation Loss: 0.6929302352092289Epoch 180/500, Training Loss: 0.6927381409820862, Validation Loss: 0.6929300350673137Epoch 181/500, Training Loss: 0.6927379713237054, Validation Loss: 0.6929298304171313Epoch 182/500, Training Loss: 0.6927377983488345, Validation Loss: 0.6929296211250847Epoch 183/500, Training Loss: 0.6927376219618622, Validation Loss: 0.6929294070526951Epoch 184/500, Training Loss: 0.6927374420637016, Validation Loss: 0.6929291880563917Epoch 185/500, Training Loss: 0.6927372585516325, Validation Loss: 0.6929289639872894Epoch 186/500, Training Loss: 0.6927370713191456, Validation Loss: 0.6929287346909536Epoch 187/500, Training Loss: 0.6927368802557715, Validation Loss: 0.6929285000071553Epoch 188/500, Training Loss: 0.6927366852469075, Validation Loss: 0.6929282597696114Epoch 189/500, Training Loss: 0.69273648617363, Validation Loss: 0.6929280138057143Epoch 190/500, Training Loss: 0.6927362829124979, Validation 
Loss: 0.6929277619362446Epoch 191/500, Training Loss: 0.6927360753353515, Validation Loss: 0.6929275039750696Epoch 192/500, Training Loss: 0.6927358633090932, Validation Loss: 0.6929272397288284Epoch 193/500, Training Loss: 0.6927356466954581, Validation Loss: 0.6929269689965949Epoch 194/500, Training Loss: 0.6927354253507791, Validation Loss: 0.6929266915695307Epoch 195/500, Training Loss: 0.6927351991257273, Validation Loss: 0.6929264072305084Epoch 196/500, Training Loss: 0.6927349678650524, Validation Loss: 0.6929261157537265Epoch 197/500, Training Loss: 0.6927347314072934, Validation Loss: 0.6929258169042931Epoch 198/500, Training Loss: 0.6927344895844904, Validation Loss: 0.692925510437795Epoch 199/500, Training Loss: 0.6927342422218633, Validation Loss: 0.6929251960998348Epoch 200/500, Training Loss: 0.6927339891374845, Validation Loss: 0.6929248736255482Epoch 201/500, Training Loss: 0.69273373014193, Validation Loss: 0.6929245427390938Epoch 202/500, Training Loss: 0.6927334650379084, Validation Loss: 0.6929242031531089Epoch 203/500, Training Loss: 0.6927331936198716, Validation Loss: 0.6929238545681395Epoch 204/500, Training Loss: 0.6927329156735982, Validation Loss: 0.692923496672037Epoch 205/500, Training Loss: 0.6927326309757615, Validation Loss: 0.6929231291393155Epoch 206/500, Training Loss: 0.6927323392934635, Validation Loss: 0.6929227516304783Epoch 207/500, Training Loss: 0.6927320403837443, Validation Loss: 0.6929223637912973Epoch 208/500, Training Loss: 0.6927317339930639, Validation Loss: 0.6929219652520568Epoch 209/500, Training Loss: 0.6927314198567556, Validation Loss: 0.6929215556267474Epoch 210/500, Training Loss: 0.6927310976984331, Validation Loss: 0.6929211345122127Epoch 211/500, Training Loss: 0.6927307672293821, Validation Loss: 0.6929207014872432Epoch 212/500, Training Loss: 0.6927304281478959, Validation Loss: 0.6929202561116199Epoch 213/500, Training Loss: 0.6927300801385805, Validation Loss: 0.6929197979250892Epoch 214/500, Training Loss: 0.6927297228716143, Validation Loss: 0.6929193264462838Epoch 215/500, Training Loss: 0.6927293560019573, Validation Loss: 0.692918841171571Epoch 216/500, Training Loss: 0.6927289791685173, Validation Loss: 0.6929183415738277Epoch 217/500, Training Loss: 0.692728591993254, Validation Loss: 0.6929178271011402Epoch 218/500, Training Loss: 0.6927281940802372, Validation Loss: 0.6929172971754178Epoch 219/500, Training Loss: 0.6927277850146275, Validation Loss: 0.6929167511909194Epoch 220/500, Training Loss: 0.6927273643616085, Validation Loss: 0.6929161885126807Epoch 221/500, Training Loss: 0.6927269316652356, Validation Loss: 0.6929156084748394Epoch 222/500, Training Loss: 0.6927264864472092, Validation Loss: 0.6929150103788505Epoch 223/500, Training Loss: 0.6927260282055748, Validation Loss: 0.6929143934915782Epoch 224/500, Training Loss: 0.6927255564133213, Validation Loss: 0.6929137570432627Epoch 225/500, Training Loss: 0.692725070516895, Validation Loss: 0.6929131002253461Epoch 226/500, Training Loss: 0.6927245699346074, Validation Loss: 0.6929124221881484Epoch 227/500, Training Loss: 0.6927240540549274, Validation Loss: 0.6929117220383848Epoch 228/500, Training Loss: 0.6927235222346584, Validation Loss: 0.6929109988365049Epoch 229/500, Training Loss: 0.6927229737969887, Validation Loss: 0.6929102515938472Epoch 230/500, Training Loss: 0.6927224080293941, Validation Loss: 0.6929094792695902Epoch 231/500, Training Loss: 0.6927218241813959, Validation Loss: 0.6929086807674814Epoch 232/500, Training Loss: 0.6927212214621552, 
Validation Loss: 0.6929078549323312Epoch 233/500, Training Loss: 0.6927205990378826, Validation Loss: 0.6929070005462469Epoch 234/500, Training Loss: 0.6927199560290687, Validation Loss: 0.6929061163245925Epoch 235/500, Training Loss: 0.6927192915074933, Validation Loss: 0.6929052009116407Epoch 236/500, Training Loss: 0.692718604493019, Validation Loss: 0.6929042528758994Epoch 237/500, Training Loss: 0.6927178939501287, Validation Loss: 0.6929032707050811Epoch 238/500, Training Loss: 0.6927171587842105, Validation Loss: 0.6929022528006863Epoch 239/500, Training Loss: 0.6927163978375355, Validation Loss: 0.6929011974721626Epoch 240/500, Training Loss: 0.6927156098849347, Validation Loss: 0.6929001029306077Epoch 241/500, Training Loss: 0.6927147936291191, Validation Loss: 0.6928989672819719Epoch 242/500, Training Loss: 0.6927139476956335, Validation Loss: 0.6928977885197142Epoch 243/500, Training Loss: 0.6927130706273952, Validation Loss: 0.6928965645168643Epoch 244/500, Training Loss: 0.6927121608787837, Validation Loss: 0.6928952930174356Epoch 245/500, Training Loss: 0.6927112168092441, Validation Loss: 0.6928939716271257Epoch 246/500, Training Loss: 0.6927102366763528, Validation Loss: 0.6928925978032416Epoch 247/500, Training Loss: 0.6927092186282913, Validation Loss: 0.6928911688437702Epoch 248/500, Training Loss: 0.6927081606956886, Validation Loss: 0.6928896818755181Epoch 249/500, Training Loss: 0.6927070607827367, Validation Loss: 0.6928881338412255Epoch 250/500, Training Loss: 0.6927059166575484, Validation Loss: 0.6928865214855524Epoch 251/500, Training Loss: 0.6927047259416501, Validation Loss: 0.6928848413398295Epoch 252/500, Training Loss: 0.6927034860985375, Validation Loss: 0.6928830897054455Epoch 253/500, Training Loss: 0.6927021944211939, Validation Loss: 0.6928812626357317Epoch 254/500, Training Loss: 0.6927008480184652, Validation Loss: 0.6928793559161933Epoch 255/500, Training Loss: 0.6926994438001775, Validation Loss: 0.6928773650429082Epoch 256/500, Training Loss: 0.6926979784608471, Validation Loss: 0.6928752851989054Epoch 257/500, Training Loss: 0.6926964484618602, Validation Loss: 0.6928731112283052Epoch 258/500, Training Loss: 0.6926948500119263, Validation Loss: 0.6928708376079801Epoch 259/500, Training Loss: 0.6926931790456439, Validation Loss: 0.6928684584164635Epoch 260/500, Training Loss: 0.6926914311999479, Validation Loss: 0.6928659672998025Epoch 261/500, Training Loss: 0.6926896017882178, Validation Loss: 0.6928633574340141Epoch 262/500, Training Loss: 0.692687685771763, Validation Loss: 0.6928606214837525Epoch 263/500, Training Loss: 0.6926856777284048, Validation Loss: 0.6928577515567615Epoch 264/500, Training Loss: 0.6926835718177996, Validation Loss: 0.6928547391536128Epoch 265/500, Training Loss: 0.6926813617431227, Validation Loss: 0.6928515751121741Epoch 266/500, Training Loss: 0.6926790407086882, Validation Loss: 0.6928482495461854Epoch 267/500, Training Loss: 0.692676601372997, Validation Loss: 0.6928447517772163Epoch 268/500, Training Loss: 0.692674035796658, Validation Loss: 0.6928410702592015Epoch 269/500, Training Loss: 0.6926713353845515, Validation Loss: 0.6928371924946257Epoch 270/500, Training Loss: 0.6926684908214915, Validation Loss: 0.6928331049412986Epoch 271/500, Training Loss: 0.6926654920005796, Validation Loss: 0.6928287929085288Epoch 272/500, Training Loss: 0.692662327943268, Validation Loss: 0.6928242404412983Epoch 273/500, Training Loss: 0.692658986710075, Validation Loss: 0.6928194301908758Epoch 274/500, Training Loss: 
0.692655455300679, Validation Loss: 0.6928143432700427Epoch 275/500, Training Loss: 0.6926517195419641, Validation Loss: 0.692808959090842Epoch 276/500, Training Loss: 0.6926477639623557, Validation Loss: 0.6928032551824536Epoch 277/500, Training Loss: 0.6926435716505286, Validation Loss: 0.6927972069863981Epoch 278/500, Training Loss: 0.6926391240962705, Validation Loss: 0.6927907876258605Epoch 279/500, Training Loss: 0.6926344010109453, Validation Loss: 0.6927839676453909Epoch 280/500, Training Loss: 0.6926293801245416, Validation Loss: 0.692776714716639Epoch 281/500, Training Loss: 0.6926240369558524, Validation Loss: 0.6927689933050643Epoch 282/500, Training Loss: 0.6926183445517154, Validation Loss: 0.692760764291701Epoch 283/500, Training Loss: 0.6926122731905545, Validation Loss: 0.6927519845430635Epoch 284/500, Training Loss: 0.6926057900446614, Validation Loss: 0.6927426064210643Epoch 285/500, Training Loss: 0.6925988587946492, Validation Loss: 0.6927325772233923Epoch 286/500, Training Loss: 0.6925914391883604, Validation Loss: 0.6927218385430692Epoch 287/500, Training Loss: 0.6925834865350927, Validation Loss: 0.6927103255338625Epoch 288/500, Training Loss: 0.692574951124313, Validation Loss: 0.692697966065733Epoch 289/500, Training Loss: 0.6925657775560131, Validation Loss: 0.6926846797515233Epoch 290/500, Training Loss: 0.6925559039673502, Validation Loss: 0.6926703768224484Epoch 291/500, Training Loss: 0.692545261137272, Validation Loss: 0.6926549568255724Epoch 292/500, Training Loss: 0.6925337714471393, Validation Loss: 0.692638307111066Epoch 293/500, Training Loss: 0.6925213476709228, Validation Loss: 0.6926203010705018Epoch 294/500, Training Loss: 0.6925078915630929, Validation Loss: 0.692600796079387Epoch 295/500, Training Loss: 0.692493292205606, Validation Loss: 0.6925796310872181Epoch 296/500, Training Loss: 0.692477424067084, Validation Loss: 0.692556623786115Epoch 297/500, Training Loss: 0.6924601447170475, Validation Loss: 0.6925315672738884Epoch 298/500, Training Loss: 0.692441292125272, Validation Loss: 0.6925042261084897Epoch 299/500, Training Loss: 0.6924206814603754, Validation Loss: 0.6924743316271529Epoch 300/500, Training Loss: 0.6923981012818157, Validation Loss: 0.6924415763738752Epoch 301/500, Training Loss: 0.6923733089942454, Validation Loss: 0.6924056074414849Epoch 302/500, Training Loss: 0.6923460254014501, Validation Loss: 0.6923660184872318Epoch 303/500, Training Loss: 0.6923159281566899, Validation Loss: 0.6923223401206892Epoch 304/500, Training Loss: 0.6922826438548368, Validation Loss: 0.6922740282859683Epoch 305/500, Training Loss: 0.6922457384458116, Validation Loss: 0.692220450161774Epoch 306/500, Training Loss: 0.6922047055640377, Validation Loss: 0.6921608669759732Epoch 307/500, Training Loss: 0.6921589522591101, Validation Loss: 0.692094412967167Epoch 308/500, Training Loss: 0.692107781470608, Validation Loss: 0.6920200695123098Epoch 309/500, Training Loss: 0.6920503704044344, Validation Loss: 0.6919366331605901Epoch 310/500, Training Loss: 0.691985743724911, Validation Loss: 0.6918426759479877Epoch 311/500, Training Loss: 0.6919127401566445, Validation Loss: 0.6917364958847415Epoch 312/500, Training Loss: 0.6918299706668622, Validation Loss: 0.6916160548698542Epoch 313/500, Training Loss: 0.6917357658366131, Validation Loss: 0.6914789004387834Epoch 314/500, Training Loss: 0.6916281092796337, Validation Loss: 0.6913220666197455Epoch 315/500, Training Loss: 0.6915045529648637, Validation Loss: 0.6911419476618682Epoch 316/500, Training Loss: 
0.6913621089533744, Validation Loss: 0.6909341363720605Epoch 317/500, Training Loss: 0.6911971102526915, Validation Loss: 0.690693216079975Epoch 318/500, Training Loss: 0.6910050310612988, Validation Loss: 0.6904124916099574Epoch 319/500, Training Loss: 0.6907802534139995, Validation Loss: 0.6900836397794914Epoch 320/500, Training Loss: 0.6905157628795472, Validation Loss: 0.6896962535051677Epoch 321/500, Training Loss: 0.6902027501855714, Validation Loss: 0.6892372451780808Epoch 322/500, Training Loss: 0.6898300881044356, Validation Loss: 0.6886900642041461Epoch 323/500, Training Loss: 0.6893836433289551, Validation Loss: 0.688033670347421Epoch 324/500, Training Loss: 0.6888453713361269, Validation Loss: 0.6872411892512497Epoch 325/500, Training Loss: 0.688192128938766, Validation Loss: 0.6862781611572016Epoch 326/500, Training Loss: 0.6873941262665753, Validation Loss: 0.6851002831773692Epoch 327/500, Training Loss: 0.6864129317641451, Validation Loss: 0.6836505494104548Epoch 328/500, Training Loss: 0.6851989490547682, Validation Loss: 0.681855729384194Epoch 329/500, Training Loss: 0.6836883167331572, Validation Loss: 0.6796222191525929Epoch 330/500, Training Loss: 0.6817992549720051, Validation Loss: 0.6768314705224483Epoch 331/500, Training Loss: 0.6794279849266454, Validation Loss: 0.6733354079342191Epoch 332/500, Training Loss: 0.676444358676533, Validation Loss: 0.6689521928927572Epoch 333/500, Training Loss: 0.6726868099139328, Validation Loss: 0.6634614286211297Epoch 334/500, Training Loss: 0.6679539217231749, Validation Loss: 0.6565929725690614Epoch 335/500, Training Loss: 0.6619831024659033, Validation Loss: 0.6479903800148678Epoch 336/500, Training Loss: 0.6543916545001851, Validation Loss: 0.6371047431565906Epoch 337/500, Training Loss: 0.6445329581513881, Validation Loss: 0.6229519145294885Epoch 338/500, Training Loss: 0.6312243267216612, Validation Loss: 0.6037386475359223Epoch 339/500, Training Loss: 0.6124745137659773, Validation Loss: 0.5767939171318286Epoch 340/500, Training Loss: 0.5859330929975239, Validation Loss: 0.5401094357058228Epoch 341/500, Training Loss: 0.551251700063433, Validation Loss: 0.4959748901889867Epoch 342/500, Training Loss: 0.512880213930354, Validation Loss: 0.45187262048476085Epoch 343/500, Training Loss: 0.4777799424483947, Validation Loss: 0.41461738967819173Epoch 344/500, Training Loss: 0.4496190748112867, Validation Loss: 0.38570945947803975Epoch 345/500, Training Loss: 0.42803350819873076, Validation Loss: 0.3631153677229445Epoch 346/500, Training Loss: 0.411647697529831, Validation Loss: 0.34455230925935504Epoch 347/500, Training Loss: 0.3993691085532199, Validation Loss: 0.3288513135926441Epoch 348/500, Training Loss: 0.3901595279667692, Validation Loss: 0.3159183034603822Epoch 349/500, Training Loss: 0.38302929340933567, Validation Loss: 0.3060238079431135Epoch 350/500, Training Loss: 0.3773867234165979, Validation Loss: 0.2989764737753428Epoch 351/500, Training Loss: 0.3730511797773718, Validation Loss: 0.2940181152700895Epoch 352/500, Training Loss: 0.3698734919116676, Validation Loss: 0.29029199383201826Epoch 353/500, Training Loss: 0.36754274945506177, Validation Loss: 0.2872088629210018Epoch 354/500, Training Loss: 0.3657139519236211, Validation Loss: 0.2846126659848305Epoch 355/500, Training Loss: 0.36413793878152567, Validation Loss: 0.2825314922896163Epoch 356/500, Training Loss: 0.3626646745315809, Validation Loss: 0.28085945142995394Epoch 357/500, Training Loss: 0.36123419164290627, Validation Loss: 0.27944014171178805Epoch 
358/500, Training Loss: 0.3598482418650183, Validation Loss: 0.2781703498514616Epoch 359/500, Training Loss: 0.35851470573839705, Validation Loss: 0.2769920100369069Epoch 360/500, Training Loss: 0.35722889091048793, Validation Loss: 0.27586840736202556Epoch 361/500, Training Loss: 0.3559775959627149, Validation Loss: 0.2747721784706341Epoch 362/500, Training Loss: 0.3547469041132345, Validation Loss: 0.2736819124616975Epoch 363/500, Training Loss: 0.35352681804298136, Validation Loss: 0.27258190748102584Epoch 364/500, Training Loss: 0.35231224013866214, Validation Loss: 0.271462102182094Epoch 365/500, Training Loss: 0.351101976127269, Validation Loss: 0.2703173935627075Epoch 366/500, Training Loss: 0.3498972726629756, Validation Loss: 0.26914655865865267Epoch 367/500, Training Loss: 0.34870061783621625, Validation Loss: 0.2679511438179354Epoch 368/500, Training Loss: 0.3475149692070846, Validation Loss: 0.26673452748330323Epoch 369/500, Training Loss: 0.3463433337826297, Validation Loss: 0.2655012075345503Epoch 370/500, Training Loss: 0.3451885749773048, Validation Loss: 0.26425628554612784Epoch 371/500, Training Loss: 0.3440533436778025, Validation Loss: 0.263005098215896Epoch 372/500, Training Loss: 0.3429400656471132, Validation Loss: 0.26175295064684206Epoch 373/500, Training Loss: 0.3418509464801629, Validation Loss: 0.2605049188425086Epoch 374/500, Training Loss: 0.34078797523896537, Validation Loss: 0.259265701599747Epoch 375/500, Training Loss: 0.3397529203459757, Validation Loss: 0.25803951190321517Epoch 376/500, Training Loss: 0.3387473183406514, Validation Loss: 0.256830004020021Epoch 377/500, Training Loss: 0.33777245933716027, Validation Loss: 0.25564023492514143Epoch 378/500, Training Loss: 0.336829373732516, Validation Loss: 0.2544726583652632Epoch 379/500, Training Loss: 0.3359188239365703, Validation Loss: 0.25332914808697543Epoch 380/500, Training Loss: 0.33504130344989247, Validation Loss: 0.2522110447647302Epoch 381/500, Training Loss: 0.334197044093169, Validation Loss: 0.2511192198229624Epoch 382/500, Training Loss: 0.33338603094474006, Validation Loss: 0.2500541490014591Epoch 383/500, Training Loss: 0.3326080237146434, Validation Loss: 0.24901598908847178Epoch 384/500, Training Loss: 0.33186258286827364, Validation Loss: 0.24800465243818598Epoch 385/500, Training Loss: 0.33114909872339404, Validation Loss: 0.2470198753471755Epoch 386/500, Training Loss: 0.3304668218691875, Validation Loss: 0.24606127780453108Epoch 387/500, Training Loss: 0.32981489349526605, Validation Loss: 0.245128413376308Epoch 388/500, Training Loss: 0.32919237449928407, Validation Loss: 0.24422080896000536Epoch 389/500, Training Loss: 0.3285982725194457, Validation Loss: 0.24333799484253174Epoch 390/500, Training Loss: 0.32803156629064173, Validation Loss: 0.2424795259482459Epoch 391/500, Training Loss: 0.3274912269422457, Validation Loss: 0.24164499542058512Epoch 392/500, Training Loss: 0.32697623604138265, Validation Loss: 0.24083404179087098Epoch 393/500, Training Loss: 0.3264856003402471, Validation Loss: 0.2400463509942224Epoch 394/500, Training Loss: 0.32601836331262846, Validation Loss: 0.23928165442953864Epoch 395/500, Training Loss: 0.3255736136654685, Validation Loss: 0.23853972415428398Epoch 396/500, Training Loss: 0.32515049108758537, Validation Loss: 0.23782036617410912Epoch 397/500, Training Loss: 0.32474818955094137, Validation Loss: 0.2371234126455747Epoch 398/500, Training Loss: 0.3243659585114392, Validation Loss: 0.23644871366656525Epoch 399/500, Training Loss: 
0.3240031023680404, Validation Loss: 0.23579612918978077Epoch 400/500, Training Loss: 0.3236589785335151, Validation Loss: 0.23516552146415917Epoch 401/500, Training Loss: 0.3233329944504234, Validation Loss: 0.23455674829013215Epoch 402/500, Training Loss: 0.32302460385544257, Validation Loss: 0.23396965726907287Epoch 403/500, Training Loss: 0.32273330255754923, Validation Loss: 0.2334040811364239Epoch 404/500, Training Loss: 0.3224586239543184, Validation Loss: 0.2328598341924409Epoch 405/500, Training Loss: 0.32220013446892104, Validation Loss: 0.23233670978455523Epoch 406/500, Training Loss: 0.32195742905075897, Validation Loss: 0.23183447875083293Epoch 407/500, Training Loss: 0.3217301268470508, Validation Loss: 0.23135288870424323Epoch 408/500, Training Loss: 0.3215178671220832, Validation Loss: 0.23089166402121722Epoch 409/500, Training Loss: 0.3213203054757569, Validation Loss: 0.23045050639360953Epoch 410/500, Training Loss: 0.3211371103932526, Validation Loss: 0.23002909580847763Epoch 411/500, Training Loss: 0.32096796014254647, Validation Loss: 0.22962709183266408Epoch 412/500, Training Loss: 0.32081254002523163, Validation Loss: 0.22924413509645705Epoch 413/500, Training Loss: 0.32067053997779993, Validation Loss: 0.22887984889022617Epoch 414/500, Training Loss: 0.3205416525143267, Validation Loss: 0.2285338408078026Epoch 415/500, Training Loss: 0.32042557099679186, Validation Loss: 0.22820570438889462Epoch 416/500, Training Loss: 0.32032198821556634, Validation Loss: 0.22789502072886503Epoch 417/500, Training Loss: 0.32023059525968256, Validation Loss: 0.22760136003722348Epoch 418/500, Training Loss: 0.32015108065424264, Validation Loss: 0.22732428313597747Epoch 419/500, Training Loss: 0.3200831297407287, Validation Loss: 0.22706334289580807Epoch 420/500, Training Loss: 0.3200264242750323, Validation Loss: 0.22681808561224842Epoch 421/500, Training Loss: 0.31998064221777717, Validation Loss: 0.2265880523261831Epoch 422/500, Training Loss: 0.31994545769191307, Validation Loss: 0.22637278009360884Epoch 423/500, Training Loss: 0.31992054108359375, Validation Loss: 0.22617180320916383Epoch 424/500, Training Loss: 0.31990555926387343, Validation Loss: 0.2259846543869199Epoch 425/500, Training Loss: 0.31990017591069353, Validation Loss: 0.22581086590061344Epoch 426/500, Training Loss: 0.3199040519127835, Validation Loss: 0.22564997068414958Epoch 427/500, Training Loss: 0.31991684583938207, Validation Loss: 0.22550150339199Epoch 428/500, Training Loss: 0.31993821446193066, Validation Loss: 0.22536500141805232Epoch 429/500, Training Loss: 0.3199678133159802, Validation Loss: 0.2252400058710359Epoch 430/500, Training Loss: 0.32000529729345334, Validation Loss: 0.22512606250370074Epoch 431/500, Training Loss: 0.32005032125698796, Validation Loss: 0.2250227225935252Epoch 432/500, Training Loss: 0.32010254066938726, Validation Loss: 0.22492954377234722Epoch 433/500, Training Loss: 0.3201616122321858, Validation Loss: 0.22484609080298273Epoch 434/500, Training Loss: 0.3202271945280363, Validation Loss: 0.2247719363013534Epoch 435/500, Training Loss: 0.32029894866205566, Validation Loss: 0.22470666140327245Epoch 436/500, Training Loss: 0.3203765388975285, Validation Loss: 0.22464985637563245Epoch 437/500, Training Loss: 0.3204596332814539, Validation Loss: 0.22460112117224748Epoch 438/500, Training Loss: 0.3205479042554252, Validation Loss: 0.2245600659349713Epoch 439/500, Training Loss: 0.32064102924732996, Validation Loss: 0.22452631144088303Epoch 440/500, Training Loss: 
0.3207386912393402, Validation Loss: 0.22449948949631096Epoch 441/500, Training Loss: 0.3208405793077378, Validation Loss: 0.22447924327825733Epoch 442/500, Training Loss: 0.3209463891302774, Validation Loss: 0.22446522762343163Epoch 443/500, Training Loss: 0.321055823457086, Validation Loss: 0.22445710926464413Epoch 444/500, Training Loss: 0.3211685925415084, Validation Loss: 0.22445456701383618Epoch 445/500, Training Loss: 0.3212844145278962, Validation Loss: 0.2244572918905657Epoch 446/500, Training Loss: 0.32140301579402364, Validation Loss: 0.22446498719441435Epoch 447/500, Training Loss: 0.3215241312466526, Validation Loss: 0.22447736851955855Epoch 448/500, Training Loss: 0.32164750456969005, Validation Loss: 0.22449416370969377Epoch 449/500, Training Loss: 0.321772888425372, Validation Loss: 0.22451511275164285Epoch 450/500, Training Loss: 0.32190004460993954, Validation Loss: 0.22453996760629796Epoch 451/500, Training Loss: 0.3220287441662914, Validation Loss: 0.22456849197605083Epoch 452/500, Training Loss: 0.32215876745707794, Validation Loss: 0.22460046100851544Epoch 453/500, Training Loss: 0.32228990420257936, Validation Loss: 0.22463566093712614Epoch 454/500, Training Loss: 0.32242195348848773, Validation Loss: 0.22467388866006113Epoch 455/500, Training Loss: 0.3225547237492904, Validation Loss: 0.2247149512598509Epoch 456/500, Training Loss: 0.3226880327333585, Validation Loss: 0.22475866546697362Epoch 457/500, Training Loss: 0.32282170745602123, Validation Loss: 0.22480485707164655Epoch 458/500, Training Loss: 0.32295558414682646, Validation Loss: 0.22485336028890157Epoch 459/500, Training Loss: 0.32308950819686855, Validation Loss: 0.22490401708282706Epoch 460/500, Training Loss: 0.32322333411148263, Validation Loss: 0.2249566764565749Epoch 461/500, Training Loss: 0.32335692547276706, Validation Loss: 0.22501119371534445Epoch 462/500, Training Loss: 0.3234901549153363, Validation Loss: 0.22506742971005508Epoch 463/500, Training Loss: 0.3236229041174318, Validation Loss: 0.22512525006982922Epoch 464/500, Training Loss: 0.323755063808086, Validation Loss: 0.2251845244316874Epoch 465/500, Training Loss: 0.32388653378947707, Validation Loss: 0.22524512567606453Epoch 466/500, Training Loss: 0.3240172229719832, Validation Loss: 0.2253069291768528Epoch 467/500, Training Loss: 0.32414704941783107, Validation Loss: 0.22536981207471615Epoch 468/500, Training Loss: 0.3242759403876735, Validation Loss: 0.2254336525823573Epoch 469/500, Training Loss: 0.32440383238301795, Validation Loss: 0.2254983293303101Epoch 470/500, Training Loss: 0.3245306711762258, Validation Loss: 0.22556372076162395Epoch 471/500, Training Loss: 0.3246564118189056, Validation Loss: 0.2256297045835313Epoch 472/500, Training Loss: 0.3247810186189649, Validation Loss: 0.22569615728379872Epoch 473/500, Training Loss: 0.32490446507645443, Validation Loss: 0.22576295371895888Epoch 474/500, Training Loss: 0.3250267337686722, Validation Loss: 0.22582996678095507Epoch 475/500, Training Loss: 0.32514781617580496, Validation Loss: 0.22589706714788807Epoch 476/500, Training Loss: 0.325267712439696, Validation Loss: 0.22596412312351624Epoch 477/500, Training Loss: 0.32538643105010906, Validation Loss: 0.22603100056887296Epoch 478/500, Training Loss: 0.32550398845505146, Validation Loss: 0.2260975629278735Epoch 479/500, Training Loss: 0.3256204085942664, Validation Loss: 0.22616367134704268Epoch 480/500, Training Loss: 0.32573572235778003, Validation Loss: 0.22622918488757385Epoch 481/500, Training Loss: 0.3258499669742859, 
[Verbose per-epoch training output condensed for readability. Summary of the runs logged in this cell:]

- 500-epoch run: losses changed only marginally over the final epochs, ending at Training Loss ≈ 0.328 and Validation Loss ≈ 0.227 at epoch 500/500.
- 800-epoch run: both losses plateaued near ln(2) ≈ 0.693 for roughly the first 550 epochs, dropped sharply around epochs 565-585 (best Validation Loss ≈ 0.262 at epoch 585), then steadily diverged: the validation loss exceeded 1.0 by epoch 680, both losses blew up past 20 by epoch 727, and every epoch from 728/800 onward reported nan for both losses, indicating numerical overflow during training.
- 100-epoch run (first 45 epochs shown): both losses remained stuck near 0.693, i.e., the network had not yet begun to fit the data.
0.6930802159835978Epoch 46/100, Training Loss: 0.6938892344877623, Validation Loss: 0.6930802013009798Epoch 47/100, Training Loss: 0.6938891512504833, Validation Loss: 0.6930801866653044Epoch 48/100, Training Loss: 0.6938890684999398, Validation Loss: 0.693080172075605Epoch 49/100, Training Loss: 0.6938889862344493, Validation Loss: 0.6930801575309052Epoch 50/100, Training Loss: 0.6938889044523299, Validation Loss: 0.693080143030217Epoch 51/100, Training Loss: 0.693888823151906, Validation Loss: 0.6930801285725405Epoch 52/100, Training Loss: 0.6938887423315072, Validation Loss: 0.6930801141568639Epoch 53/100, Training Loss: 0.6938886619894655, Validation Loss: 0.6930800997821637Epoch 54/100, Training Loss: 0.693888582124121, Validation Loss: 0.6930800854474006Epoch 55/100, Training Loss: 0.6938885027338173, Validation Loss: 0.6930800711515249Epoch 56/100, Training Loss: 0.6938884238169075, Validation Loss: 0.6930800568934694Epoch 57/100, Training Loss: 0.6938883453717481, Validation Loss: 0.6930800426721534Epoch 58/100, Training Loss: 0.6938882673967031, Validation Loss: 0.6930800284864811Epoch 59/100, Training Loss: 0.6938881898901461, Validation Loss: 0.6930800143353386Epoch 60/100, Training Loss: 0.6938881128504554, Validation Loss: 0.6930800002175966Epoch 61/100, Training Loss: 0.6938880362760199, Validation Loss: 0.6930799861321082Epoch 62/100, Training Loss: 0.6938879601652342, Validation Loss: 0.6930799720777068Epoch 63/100, Training Loss: 0.6938878845165062, Validation Loss: 0.6930799580532079Epoch 64/100, Training Loss: 0.6938878093282501, Validation Loss: 0.6930799440574071Epoch 65/100, Training Loss: 0.6938877345988899, Validation Loss: 0.6930799300890802Epoch 66/100, Training Loss: 0.6938876603268592, Validation Loss: 0.6930799161469793Epoch 67/100, Training Loss: 0.6938875865106038, Validation Loss: 0.6930799022298368Epoch 68/100, Training Loss: 0.6938875131485791, Validation Loss: 0.6930798883363611Epoch 69/100, Training Loss: 0.6938874402392514, Validation Loss: 0.693079874465236Epoch 70/100, Training Loss: 0.6938873677810989, Validation Loss: 0.6930798606151223Epoch 71/100, Training Loss: 0.6938872957726111, Validation Loss: 0.6930798467846533Epoch 72/100, Training Loss: 0.6938872242122915, Validation Loss: 0.6930798329724362Epoch 73/100, Training Loss: 0.693887153098653, Validation Loss: 0.69307981917705Epoch 74/100, Training Loss: 0.693887082430223, Validation Loss: 0.6930798053970463Epoch 75/100, Training Loss: 0.6938870122055423, Validation Loss: 0.6930797916309451Epoch 76/100, Training Loss: 0.6938869424231661, Validation Loss: 0.6930797778772366Epoch 77/100, Training Loss: 0.6938868730816614, Validation Loss: 0.6930797641343774Epoch 78/100, Training Loss: 0.6938868041796104, Validation Loss: 0.693079750400792Epoch 79/100, Training Loss: 0.693886735715611, Validation Loss: 0.6930797366748696Epoch 80/100, Training Loss: 0.6938866676882733, Validation Loss: 0.6930797229549628Epoch 81/100, Training Loss: 0.6938866000962255, Validation Loss: 0.6930797092393877Epoch 82/100, Training Loss: 0.6938865329381085, Validation Loss: 0.6930796955264207Epoch 83/100, Training Loss: 0.6938864662125822, Validation Loss: 0.6930796818142986Epoch 84/100, Training Loss: 0.6938863999183208, Validation Loss: 0.6930796681012149Epoch 85/100, Training Loss: 0.6938863340540173, Validation Loss: 0.6930796543853214Epoch 86/100, Training Loss: 0.6938862686183777, Validation Loss: 0.6930796406647228Epoch 87/100, Training Loss: 0.6938862036101285, Validation Loss: 0.6930796269374777Epoch 88/100, 
Training Loss: 0.6938861390280116, Validation Loss: 0.6930796132015961Epoch 89/100, Training Loss: 0.69388607487079, Validation Loss: 0.6930795994550366Epoch 90/100, Training Loss: 0.6938860111372416, Validation Loss: 0.6930795856957048Epoch 91/100, Training Loss: 0.6938859478261651, Validation Loss: 0.6930795719214522Epoch 92/100, Training Loss: 0.6938858849363758, Validation Loss: 0.6930795581300725Epoch 93/100, Training Loss: 0.6938858224667117, Validation Loss: 0.6930795443193004Epoch 94/100, Training Loss: 0.6938857604160268, Validation Loss: 0.6930795304868094Epoch 95/100, Training Loss: 0.6938856987831977, Validation Loss: 0.6930795166302073Epoch 96/100, Training Loss: 0.6938856375671199, Validation Loss: 0.6930795027470367Epoch 97/100, Training Loss: 0.693885576766709, Validation Loss: 0.6930794888347702Epoch 98/100, Training Loss: 0.6938855163809023, Validation Loss: 0.6930794748908067Epoch 99/100, Training Loss: 0.6938854564086578, Validation Loss: 0.6930794609124722Epoch 100/100, Training Loss: 0.693885396848956, Validation Loss: 0.6930794468970125Epoch 1/300, Training Loss: 0.6939448617102483, Validation Loss: 0.6928522620109657Epoch 2/300, Training Loss: 0.6937867769918284, Validation Loss: 0.6929595490438003Epoch 3/300, Training Loss: 0.6938051697992917, Validation Loss: 0.693025772481237Epoch 4/300, Training Loss: 0.6938235502607399, Validation Loss: 0.6930546179861967Epoch 5/300, Training Loss: 0.6938322863577813, Validation Loss: 0.6930661213200305Epoch 6/300, Training Loss: 0.6938359665623861, Validation Loss: 0.6930705808208948Epoch 7/300, Training Loss: 0.6938375238295096, Validation Loss: 0.6930722979872412Epoch 8/300, Training Loss: 0.6938382445924463, Validation Loss: 0.6930729635912216Epoch 9/300, Training Loss: 0.6938386425730322, Validation Loss: 0.6930732282407928Epoch 10/300, Training Loss: 0.6938389165780288, Validation Loss: 0.6930733403125815Epoch 11/300, Training Loss: 0.6938391426640529, Validation Loss: 0.6930733943345335Epoch 12/300, Training Loss: 0.6938393497988281, Validation Loss: 0.6930734262353677Epoch 13/300, Training Loss: 0.6938395489944261, Validation Loss: 0.6930734496614541Epoch 14/300, Training Loss: 0.6938397444323433, Validation Loss: 0.6930734697925104Epoch 15/300, Training Loss: 0.6938399376982093, Validation Loss: 0.6930734885923702Epoch 16/300, Training Loss: 0.6938401293915992, Validation Loss: 0.6930735068036747Epoch 17/300, Training Loss: 0.6938403197376269, Validation Loss: 0.6930735247054163Epoch 18/300, Training Loss: 0.6938405088192376, Validation Loss: 0.6930735424005835Epoch 19/300, Training Loss: 0.6938406966653935, Validation Loss: 0.6930735599253602Epoch 20/300, Training Loss: 0.6938408832845625, Validation Loss: 0.6930735772905742Epoch 21/300, Training Loss: 0.6938410686774151, Validation Loss: 0.6930735944974218Epoch 22/300, Training Loss: 0.6938412528416482, Validation Loss: 0.6930736115434367Epoch 23/300, Training Loss: 0.6938414357737974, Validation Loss: 0.6930736284247462Epoch 24/300, Training Loss: 0.6938416174699377, Validation Loss: 0.6930736451369277Epoch 25/300, Training Loss: 0.6938417979259266, Validation Loss: 0.693073661675329Epoch 26/300, Training Loss: 0.693841977137503, Validation Loss: 0.6930736780351852Epoch 27/300, Training Loss: 0.6938421551003139, Validation Loss: 0.693073694211662Epoch 28/300, Training Loss: 0.6938423318099219, Validation Loss: 0.6930737101998673Epoch 29/300, Training Loss: 0.6938425072617999, Validation Loss: 0.693073725994852Epoch 30/300, Training Loss: 0.6938426814513285, 
Validation Loss: 0.6930737415916065Epoch 31/300, Training Loss: 0.6938428543737831, Validation Loss: 0.6930737569850562Epoch 32/300, Training Loss: 0.6938430260243265, Validation Loss: 0.6930737721700557Epoch 33/300, Training Loss: 0.693843196398001, Validation Loss: 0.6930737871413821Epoch 34/300, Training Loss: 0.6938433654897196, Validation Loss: 0.693073801893729Epoch 35/300, Training Loss: 0.6938435332942529, Validation Loss: 0.6930738164217003Epoch 36/300, Training Loss: 0.6938436998062211, Validation Loss: 0.6930738307198022Epoch 37/300, Training Loss: 0.693843865020085, Validation Loss: 0.6930738447824363Epoch 38/300, Training Loss: 0.6938440289301294, Validation Loss: 0.6930738586038911Epoch 39/300, Training Loss: 0.6938441915304607, Validation Loss: 0.6930738721783376Epoch 40/300, Training Loss: 0.6938443528149889, Validation Loss: 0.693073885499817Epoch 41/300, Training Loss: 0.6938445127774168, Validation Loss: 0.6930738985622356Epoch 42/300, Training Loss: 0.6938446714112303, Validation Loss: 0.693073911359354Epoch 43/300, Training Loss: 0.6938448287096847, Validation Loss: 0.6930739238847787Epoch 44/300, Training Loss: 0.6938449846657906, Validation Loss: 0.6930739361319532Epoch 45/300, Training Loss: 0.6938451392723, Validation Loss: 0.6930739480941466Epoch 46/300, Training Loss: 0.6938452925216945, Validation Loss: 0.6930739597644449Epoch 47/300, Training Loss: 0.6938454444061704, Validation Loss: 0.6930739711357381Epoch 48/300, Training Loss: 0.69384559491762, Validation Loss: 0.6930739822007114Epoch 49/300, Training Loss: 0.6938457440476183, Validation Loss: 0.6930739929518313Epoch 50/300, Training Loss: 0.6938458917874106, Validation Loss: 0.6930740033813334Epoch 51/300, Training Loss: 0.6938460381278863, Validation Loss: 0.6930740134812099Epoch 52/300, Training Loss: 0.6938461830595675, Validation Loss: 0.6930740232431943Epoch 53/300, Training Loss: 0.6938463265725903, Validation Loss: 0.6930740326587491Epoch 54/300, Training Loss: 0.6938464686566818, Validation Loss: 0.6930740417190486Epoch 55/300, Training Loss: 0.6938466093011423, Validation Loss: 0.6930740504149641Epoch 56/300, Training Loss: 0.6938467484948225, Validation Loss: 0.6930740587370451Epoch 57/300, Training Loss: 0.6938468862261022, Validation Loss: 0.6930740666755034Epoch 58/300, Training Loss: 0.693847022482866, Validation Loss: 0.693074074220194Epoch 59/300, Training Loss: 0.6938471572524796, Validation Loss: 0.6930740813605923Epoch 60/300, Training Loss: 0.6938472905217612, Validation Loss: 0.6930740880857771Epoch 61/300, Training Loss: 0.6938474222769603, Validation Loss: 0.6930740943844044Epoch 62/300, Training Loss: 0.6938475525037222, Validation Loss: 0.6930741002446872Epoch 63/300, Training Loss: 0.6938476811870623, Validation Loss: 0.6930741056543673Epoch 64/300, Training Loss: 0.6938478083113334, Validation Loss: 0.6930741106006915Epoch 65/300, Training Loss: 0.6938479338601925, Validation Loss: 0.6930741150703811Epoch 66/300, Training Loss: 0.6938480578165661, Validation Loss: 0.6930741190496026Epoch 67/300, Training Loss: 0.6938481801626126, Validation Loss: 0.6930741225239381Epoch 68/300, Training Loss: 0.6938483008796841, Validation Loss: 0.6930741254783468Epoch 69/300, Training Loss: 0.6938484199482857, Validation Loss: 0.6930741278971352Epoch 70/300, Training Loss: 0.6938485373480272, Validation Loss: 0.6930741297639128Epoch 71/300, Training Loss: 0.6938486530575876, Validation Loss: 0.693074131061556Epoch 72/300, Training Loss: 0.6938487670546547, Validation Loss: 
0.6930741317721624Epoch 73/300, Training Loss: 0.693848879315882, Validation Loss: 0.6930741318770053Epoch 74/300, Training Loss: 0.6938489898168296, Validation Loss: 0.6930741313564835Epoch 75/300, Training Loss: 0.69384909853191, Validation Loss: 0.6930741301900698Epoch 76/300, Training Loss: 0.6938492054343222, Validation Loss: 0.6930741283562543Epoch 77/300, Training Loss: 0.6938493104959899, Validation Loss: 0.6930741258324832Epoch 78/300, Training Loss: 0.693849413687492, Validation Loss: 0.6930741225950958Epoch 79/300, Training Loss: 0.6938495149779897, Validation Loss: 0.6930741186192548Epoch 80/300, Training Loss: 0.6938496143351444, Validation Loss: 0.6930741138788719Epoch 81/300, Training Loss: 0.6938497117250402, Validation Loss: 0.69307410834653Epoch 82/300, Training Loss: 0.6938498071120883, Validation Loss: 0.6930741019933963Epoch 83/300, Training Loss: 0.6938499004589408, Validation Loss: 0.6930740947891316Epoch 84/300, Training Loss: 0.6938499917263824, Validation Loss: 0.6930740867017932Epoch 85/300, Training Loss: 0.6938500808732244, Validation Loss: 0.6930740776977272Epoch 86/300, Training Loss: 0.6938501678561918, Validation Loss: 0.693074067741456Epoch 87/300, Training Loss: 0.6938502526297985, Validation Loss: 0.6930740567955558Epoch 88/300, Training Loss: 0.6938503351462154, Validation Loss: 0.6930740448205254Epoch 89/300, Training Loss: 0.6938504153551277, Validation Loss: 0.69307403177464Epoch 90/300, Training Loss: 0.6938504932035858, Validation Loss: 0.6930740176138013Epoch 91/300, Training Loss: 0.6938505686358402, Validation Loss: 0.6930740022913691Epoch 92/300, Training Loss: 0.693850641593171, Validation Loss: 0.693073985757982Epoch 93/300, Training Loss: 0.6938507120136937, Validation Loss: 0.6930739679613602Epoch 94/300, Training Loss: 0.693850779832162, Validation Loss: 0.6930739488460975Epoch 95/300, Training Loss: 0.6938508449797498, Validation Loss: 0.6930739283534295Epoch 96/300, Training Loss: 0.6938509073838132, Validation Loss: 0.693073906420985Epoch 97/300, Training Loss: 0.693850966967641, Validation Loss: 0.6930738829825146Epoch 98/300, Training Loss: 0.6938510236501813, Validation Loss: 0.6930738579675984Epoch 99/300, Training Loss: 0.6938510773457407, Validation Loss: 0.693073831301322Epoch 100/300, Training Loss: 0.6938511279636725, Validation Loss: 0.6930738029039295Epoch 101/300, Training Loss: 0.693851175408023, Validation Loss: 0.6930737726904388Epoch 102/300, Training Loss: 0.6938512195771578, Validation Loss: 0.6930737405702265Epoch 103/300, Training Loss: 0.6938512603633572, Validation Loss: 0.6930737064465688Epoch 104/300, Training Loss: 0.6938512976523634, Validation Loss: 0.6930736702161455Epoch 105/300, Training Loss: 0.6938513313229079, Validation Loss: 0.6930736317684868Epoch 106/300, Training Loss: 0.6938513612461786, Validation Loss: 0.6930735909853749Epoch 107/300, Training Loss: 0.693851387285247, Validation Loss: 0.6930735477401806Epoch 108/300, Training Loss: 0.6938514092944416, Validation Loss: 0.693073501897137Epoch 109/300, Training Loss: 0.6938514271186608, Validation Loss: 0.6930734533105343Epoch 110/300, Training Loss: 0.6938514405926224, Validation Loss: 0.6930734018238371Epoch 111/300, Training Loss: 0.6938514495400396, Validation Loss: 0.6930733472687056Epoch 112/300, Training Loss: 0.6938514537727146, Validation Loss: 0.6930732894639111Epoch 113/300, Training Loss: 0.6938514530895454, Validation Loss: 0.6930732282141396Epoch 114/300, Training Loss: 0.6938514472754245, Validation Loss: 0.6930731633086579Epoch 
115/300, Training Loss: 0.6938514361000383, Validation Loss: 0.6930730945198368Epoch 116/300, Training Loss: 0.693851419316519, Validation Loss: 0.693073021601503Epoch 117/300, Training Loss: 0.693851396659976, Validation Loss: 0.6930729442871025Epoch 118/300, Training Loss: 0.6938513678458504, Validation Loss: 0.6930728622876517Epoch 119/300, Training Loss: 0.6938513325680935, Validation Loss: 0.6930727752894433Epoch 120/300, Training Loss: 0.6938512904971486, Validation Loss: 0.6930726829514746Epoch 121/300, Training Loss: 0.6938512412776825, Validation Loss: 0.6930725849025621Epoch 122/300, Training Loss: 0.6938511845260765, Validation Loss: 0.6930724807380964Epoch 123/300, Training Loss: 0.6938511198275978, Validation Loss: 0.6930723700163829Epoch 124/300, Training Loss: 0.6938510467332433, Validation Loss: 0.6930722522545119Epoch 125/300, Training Loss: 0.6938509647561878, Validation Loss: 0.6930721269236827Epoch 126/300, Training Loss: 0.6938508733677845, Validation Loss: 0.6930719934439049Epoch 127/300, Training Loss: 0.6938507719930582, Validation Loss: 0.693071851177975Epoch 128/300, Training Loss: 0.6938506600056001, Validation Loss: 0.6930716994246168Epoch 129/300, Training Loss: 0.6938505367217908, Validation Loss: 0.6930715374106453Epoch 130/300, Training Loss: 0.6938504013942236, Validation Loss: 0.6930713642820077Epoch 131/300, Training Loss: 0.693850253204221, Validation Loss: 0.6930711790934961Epoch 132/300, Training Loss: 0.6938500912532823, Validation Loss: 0.6930709807969156Epoch 133/300, Training Loss: 0.693849914553287, Validation Loss: 0.6930707682274407Epoch 134/300, Training Loss: 0.6938497220152385, Validation Loss: 0.6930705400878259Epoch 135/300, Training Loss: 0.6938495124362939, Validation Loss: 0.6930702949300949Epoch 136/300, Training Loss: 0.6938492844847809, Validation Loss: 0.6930700311342312Epoch 137/300, Training Loss: 0.6938490366828092, Validation Loss: 0.6930697468832988Epoch 138/300, Training Loss: 0.6938487673860516, Validation Loss: 0.6930694401343105Epoch 139/300, Training Loss: 0.6938484747601368, Validation Loss: 0.6930691085839858Epoch 140/300, Training Loss: 0.6938481567529757, Validation Loss: 0.6930687496283673Epoch 141/300, Training Loss: 0.6938478110622224, Validation Loss: 0.6930683603150095Epoch 142/300, Training Loss: 0.6938474350968395, Validation Loss: 0.6930679372861551Epoch 143/300, Training Loss: 0.693847025931516, Validation Loss: 0.693067476710924Epoch 144/300, Training Loss: 0.6938465802523813, Validation Loss: 0.693066974204035Epoch 145/300, Training Loss: 0.693846094292046, Validation Loss: 0.6930664247279602Epoch 146/300, Training Loss: 0.6938455637515044, Validation Loss: 0.6930658224745654Epoch 147/300, Training Loss: 0.6938449837057659, Validation Loss: 0.6930651607212257Epoch 148/300, Training Loss: 0.6938443484892224, Validation Loss: 0.6930644316550085Epoch 149/300, Training Loss: 0.6938436515556272, Validation Loss: 0.6930636261566383Epoch 150/300, Training Loss: 0.6938428853060503, Validation Loss: 0.6930627335335016Epoch 151/300, Training Loss: 0.6938420408761811, Validation Loss: 0.6930617411876593Epoch 152/300, Training Loss: 0.6938411078716499, Validation Loss: 0.6930606342003304Epoch 153/300, Training Loss: 0.6938400740363743, Validation Loss: 0.6930593948082739Epoch 154/300, Training Loss: 0.6938389248339406, Validation Loss: 0.6930580017390959Epoch 155/300, Training Loss: 0.6938376429150652, Validation Loss: 0.6930564293608928Epoch 156/300, Training Loss: 0.6938362074345368, Validation Loss: 
0.6930546465853408Epoch 157/300, Training Loss: 0.6938345931672925, Validation Loss: 0.693052615440177Epoch 158/300, Training Loss: 0.6938327693537624, Validation Loss: 0.6930502891937635Epoch 159/300, Training Loss: 0.6938306981762408, Validation Loss: 0.6930476098660422Epoch 160/300, Training Loss: 0.6938283327264954, Validation Loss: 0.6930445048888378Epoch 161/300, Training Loss: 0.6938256142629053, Validation Loss: 0.6930408825716599Epoch 162/300, Training Loss: 0.6938224684619543, Validation Loss: 0.6930366258667735Epoch 163/300, Training Loss: 0.6938188002250608, Validation Loss: 0.6930315836760967Epoch 164/300, Training Loss: 0.6938144863767153, Validation Loss: 0.6930255585466079Epoch 165/300, Training Loss: 0.6938093652307357, Validation Loss: 0.6930182889643237Epoch 166/300, Training Loss: 0.6938032214156047, Validation Loss: 0.6930094234101974Epoch 167/300, Training Loss: 0.6937957633718882, Validation Loss: 0.6929984815780561Epoch 168/300, Training Loss: 0.6937865892592014, Validation Loss: 0.6929847951038417Epoch 169/300, Training Loss: 0.6937751340564579, Validation Loss: 0.6929674147176419Epoch 170/300, Training Loss: 0.6937605852645007, Validation Loss: 0.6929449607121516Epoch 171/300, Training Loss: 0.6937417444869682, Validation Loss: 0.6929153744684894Epoch 172/300, Training Loss: 0.6937167922819742, Validation Loss: 0.692875490597886Epoch 173/300, Training Loss: 0.6936828728538897, Validation Loss: 0.6928202694238725Epoch 174/300, Training Loss: 0.693635326876399, Validation Loss: 0.6927413532223137Epoch 175/300, Training Loss: 0.6935661980476764, Validation Loss: 0.692624194764426Epoch 176/300, Training Loss: 0.6934611398066696, Validation Loss: 0.6924419549926608Epoch 177/300, Training Loss: 0.6932925129404816, Validation Loss: 0.6921414538046162Epoch 178/300, Training Loss: 0.6930025160994137, Validation Loss: 0.6916074825215339Epoch 179/300, Training Loss: 0.6924570101917241, Validation Loss: 0.6905602874786696Epoch 180/300, Training Loss: 0.6912996076411729, Validation Loss: 0.6882114037480644Epoch 181/300, Training Loss: 0.6883983608829737, Validation Loss: 0.6818601432623843Epoch 182/300, Training Loss: 0.6792547234485795, Validation Loss: 0.659909468042662Epoch 183/300, Training Loss: 0.643032982654397, Validation Loss: 0.5717511505576006Epoch 184/300, Training Loss: 0.5314661816115533, Validation Loss: 0.39292181820695593Epoch 185/300, Training Loss: 0.423159296266635, Validation Loss: 0.314683617248503Epoch 186/300, Training Loss: 0.3783054544219871, Validation Loss: 0.2871604459435192Epoch 187/300, Training Loss: 0.36251717849948617, Validation Loss: 0.27690553877115975Epoch 188/300, Training Loss: 0.3614654704278816, Validation Loss: 0.2742587298589868Epoch 189/300, Training Loss: 0.36589723961685916, Validation Loss: 0.2745659459433656Epoch 190/300, Training Loss: 0.3720574919743688, Validation Loss: 0.27618166499269Epoch 191/300, Training Loss: 0.3783423227552397, Validation Loss: 0.27821715166475747Epoch 192/300, Training Loss: 0.3842857340645633, Validation Loss: 0.2803063902388837Epoch 193/300, Training Loss: 0.3899692036545893, Validation Loss: 0.28235417187491685Epoch 194/300, Training Loss: 0.3955992718243272, Validation Loss: 0.2843604299853861Epoch 195/300, Training Loss: 0.4013795590507012, Validation Loss: 0.2863519316668173Epoch 196/300, Training Loss: 0.4074929669573253, Validation Loss: 0.28836136952761915Epoch 197/300, Training Loss: 0.4141115325418693, Validation Loss: 0.2904241426504767Epoch 198/300, Training Loss: 0.4214171758102318, 
Validation Loss: 0.29258424715042697Epoch 199/300, Training Loss: 0.4296457739126633, Validation Loss: 0.29491178668484097Epoch 200/300, Training Loss: 0.4392090628344346, Validation Loss: 0.2975468270632257Epoch 201/300, Training Loss: 0.4511204269794469, Validation Loss: 0.3008091213968359Epoch 202/300, Training Loss: 0.46795795976378607, Validation Loss: 0.3053650941949252Epoch 203/300, Training Loss: 0.49331007905939767, Validation Loss: 0.3121163591391754Epoch 204/300, Training Loss: 0.5307354437506959, Validation Loss: 0.32196814386657735Epoch 205/300, Training Loss: 0.5816236555880432, Validation Loss: 0.33566248961587436Epoch 206/300, Training Loss: 0.637539984682362, Validation Loss: 0.352927493371635Epoch 207/300, Training Loss: 0.678698494763442, Validation Loss: 0.37670899220969184Epoch 208/300, Training Loss: 0.7254394756023625, Validation Loss: 0.40898507765683184Epoch 209/300, Training Loss: 0.7729261986049485, Validation Loss: 0.4429320364168313Epoch 210/300, Training Loss: 1.0868802077939734, Validation Loss: 0.5287820949289722Epoch 211/300, Training Loss: 1.4577107810198942, Validation Loss: 0.6684929054584854Epoch 212/300, Training Loss: nan, Validation Loss: nanEpoch 213/300, Training Loss: nan, Validation Loss: nanEpoch 214/300, Training Loss: nan, Validation Loss: nanEpoch 215/300, Training Loss: nan, Validation Loss: nanEpoch 216/300, Training Loss: nan, Validation Loss: nanEpoch 217/300, Training Loss: nan, Validation Loss: nanEpoch 218/300, Training Loss: nan, Validation Loss: nanEpoch 219/300, Training Loss: nan, Validation Loss: nanEpoch 220/300, Training Loss: nan, Validation Loss: nanEpoch 221/300, Training Loss: nan, Validation Loss: nanEpoch 222/300, Training Loss: nan, Validation Loss: nanEpoch 223/300, Training Loss: nan, Validation Loss: nanEpoch 224/300, Training Loss: nan, Validation Loss: nanEpoch 225/300, Training Loss: nan, Validation Loss: nanEpoch 226/300, Training Loss: nan, Validation Loss: nanEpoch 227/300, Training Loss: nan, Validation Loss: nanEpoch 228/300, Training Loss: nan, Validation Loss: nanEpoch 229/300, Training Loss: nan, Validation Loss: nanEpoch 230/300, Training Loss: nan, Validation Loss: nanEpoch 231/300, Training Loss: nan, Validation Loss: nanEpoch 232/300, Training Loss: nan, Validation Loss: nanEpoch 233/300, Training Loss: nan, Validation Loss: nanEpoch 234/300, Training Loss: nan, Validation Loss: nanEpoch 235/300, Training Loss: nan, Validation Loss: nanEpoch 236/300, Training Loss: nan, Validation Loss: nanEpoch 237/300, Training Loss: nan, Validation Loss: nanEpoch 238/300, Training Loss: nan, Validation Loss: nanEpoch 239/300, Training Loss: nan, Validation Loss: nanEpoch 240/300, Training Loss: nan, Validation Loss: nanEpoch 241/300, Training Loss: nan, Validation Loss: nanEpoch 242/300, Training Loss: nan, Validation Loss: nanEpoch 243/300, Training Loss: nan, Validation Loss: nanEpoch 244/300, Training Loss: nan, Validation Loss: nanEpoch 245/300, Training Loss: nan, Validation Loss: nanEpoch 246/300, Training Loss: nan, Validation Loss: nanEpoch 247/300, Training Loss: nan, Validation Loss: nanEpoch 248/300, Training Loss: nan, Validation Loss: nanEpoch 249/300, Training Loss: nan, Validation Loss: nanEpoch 250/300, Training Loss: nan, Validation Loss: nanEpoch 251/300, Training Loss: nan, Validation Loss: nanEpoch 252/300, Training Loss: nan, Validation Loss: nanEpoch 253/300, Training Loss: nan, Validation Loss: nanEpoch 254/300, Training Loss: nan, Validation Loss: nanEpoch 255/300, Training Loss: nan, 
Validation Loss: nanEpoch 256/300, Training Loss: nan, Validation Loss: nanEpoch 257/300, Training Loss: nan, Validation Loss: nanEpoch 258/300, Training Loss: nan, Validation Loss: nanEpoch 259/300, Training Loss: nan, Validation Loss: nanEpoch 260/300, Training Loss: nan, Validation Loss: nanEpoch 261/300, Training Loss: nan, Validation Loss: nanEpoch 262/300, Training Loss: nan, Validation Loss: nanEpoch 263/300, Training Loss: nan, Validation Loss: nanEpoch 264/300, Training Loss: nan, Validation Loss: nanEpoch 265/300, Training Loss: nan, Validation Loss: nanEpoch 266/300, Training Loss: nan, Validation Loss: nanEpoch 267/300, Training Loss: nan, Validation Loss: nanEpoch 268/300, Training Loss: nan, Validation Loss: nanEpoch 269/300, Training Loss: nan, Validation Loss: nanEpoch 270/300, Training Loss: nan, Validation Loss: nanEpoch 271/300, Training Loss: nan, Validation Loss: nanEpoch 272/300, Training Loss: nan, Validation Loss: nanEpoch 273/300, Training Loss: nan, Validation Loss: nanEpoch 274/300, Training Loss: nan, Validation Loss: nanEpoch 275/300, Training Loss: nan, Validation Loss: nanEpoch 276/300, Training Loss: nan, Validation Loss: nanEpoch 277/300, Training Loss: nan, Validation Loss: nanEpoch 278/300, Training Loss: nan, Validation Loss: nanEpoch 279/300, Training Loss: nan, Validation Loss: nanEpoch 280/300, Training Loss: nan, Validation Loss: nanEpoch 281/300, Training Loss: nan, Validation Loss: nanEpoch 282/300, Training Loss: nan, Validation Loss: nanEpoch 283/300, Training Loss: nan, Validation Loss: nanEpoch 284/300, Training Loss: nan, Validation Loss: nanEpoch 285/300, Training Loss: nan, Validation Loss: nanEpoch 286/300, Training Loss: nan, Validation Loss: nanEpoch 287/300, Training Loss: nan, Validation Loss: nanEpoch 288/300, Training Loss: nan, Validation Loss: nanEpoch 289/300, Training Loss: nan, Validation Loss: nanEpoch 290/300, Training Loss: nan, Validation Loss: nanEpoch 291/300, Training Loss: nan, Validation Loss: nanEpoch 292/300, Training Loss: nan, Validation Loss: nanEpoch 293/300, Training Loss: nan, Validation Loss: nanEpoch 294/300, Training Loss: nan, Validation Loss: nanEpoch 295/300, Training Loss: nan, Validation Loss: nanEpoch 296/300, Training Loss: nan, Validation Loss: nanEpoch 297/300, Training Loss: nan, Validation Loss: nanEpoch 298/300, Training Loss: nan, Validation Loss: nanEpoch 299/300, Training Loss: nan, Validation Loss: nanEpoch 300/300, Training Loss: nan, Validation Loss: nanEpoch 1/500, Training Loss: 0.6939324715487849, Validation Loss: 0.6928531494317351Epoch 2/500, Training Loss: 0.6937815176843226, Validation Loss: 0.6929592553688283Epoch 3/500, Training Loss: 0.6937997511296365, Validation Loss: 0.6930245498149556Epoch 4/500, Training Loss: 0.6938178599204596, Validation Loss: 0.6930531109056101Epoch 5/500, Training Loss: 0.6938265258821185, Validation Loss: 0.6930645703140202Epoch 6/500, Training Loss: 0.6938302177917741, Validation Loss: 0.693069051721375Epoch 7/500, Training Loss: 0.6938318100406974, Validation Loss: 0.6930708034228721Epoch 8/500, Training Loss: 0.6938325704236068, Validation Loss: 0.6930715031951584Epoch 9/500, Training Loss: 0.6938330078303329, Validation Loss: 0.6930717996516083Epoch 10/500, Training Loss: 0.693833320164808, Validation Loss: 0.6930719415412354Epoch 11/500, Training Loss: 0.6938335835031405, Validation Loss: 0.6930720239633467Epoch 12/500, Training Loss: 0.6938338269624943, Validation Loss: 0.6930720832282911Epoch 13/500, Training Loss: 0.6938340616701913, Validation 
Loss: 0.6930721331915773Epoch 14/500, Training Loss: 0.6938342918781137, Validation Loss: 0.6930721791383125Epoch 15/500, Training Loss: 0.6938345192127768, Validation Loss: 0.6930722230835474Epoch 16/500, Training Loss: 0.6938347442982915, Validation Loss: 0.6930722657946486Epoch 17/500, Training Loss: 0.6938349673757165, Validation Loss: 0.6930723075626884Epoch 18/500, Training Loss: 0.6938351885393106, Validation Loss: 0.6930723484965975Epoch 19/500, Training Loss: 0.6938354078266231, Validation Loss: 0.693072388635292Epoch 20/500, Training Loss: 0.6938356252528284, Validation Loss: 0.6930724279903877Epoch 21/500, Training Loss: 0.6938358408237841, Validation Loss: 0.6930724665624515Epoch 22/500, Training Loss: 0.6938360545409938, Validation Loss: 0.6930725043471584Epoch 23/500, Training Loss: 0.6938362664034582, Validation Loss: 0.6930725413376028Epoch 24/500, Training Loss: 0.6938364764083552, Validation Loss: 0.6930725775251351Epoch 25/500, Training Loss: 0.6938366845512574, Validation Loss: 0.6930726128996383Epoch 26/500, Training Loss: 0.693836890826175, Validation Loss: 0.6930726474495807Epoch 27/500, Training Loss: 0.6938370952255302, Validation Loss: 0.6930726811619856Epoch 28/500, Training Loss: 0.6938372977400917, Validation Loss: 0.6930727140223535Epoch 29/500, Training Loss: 0.6938374983588965, Validation Loss: 0.6930727460145738Epoch 30/500, Training Loss: 0.693837697069158, Validation Loss: 0.6930727771208135Epoch 31/500, Training Loss: 0.6938378938561627, Validation Loss: 0.6930728073213965Epoch 32/500, Training Loss: 0.6938380887031583, Validation Loss: 0.693072836594671Epoch 33/500, Training Loss: 0.6938382815912205, Validation Loss: 0.6930728649168615Epoch 34/500, Training Loss: 0.6938384724991195, Validation Loss: 0.693072892261905Epoch 35/500, Training Loss: 0.6938386614031639, Validation Loss: 0.6930729186012731Epoch 36/500, Training Loss: 0.6938388482770281, Validation Loss: 0.6930729439037738Epoch 37/500, Training Loss: 0.6938390330915682, Validation Loss: 0.6930729681353349Epoch 38/500, Training Loss: 0.6938392158146127, Validation Loss: 0.6930729912587611Epoch 39/500, Training Loss: 0.693839396410736, Validation Loss: 0.6930730132334696Epoch 40/500, Training Loss: 0.6938395748410044, Validation Loss: 0.6930730340151966Epoch 41/500, Training Loss: 0.6938397510626968, Validation Loss: 0.6930730535556695Epoch 42/500, Training Loss: 0.6938399250289986, Validation Loss: 0.693073071802248Epoch 43/500, Training Loss: 0.6938400966886584, Validation Loss: 0.6930730886975244Epoch 44/500, Training Loss: 0.6938402659856143, Validation Loss: 0.6930731041788771Epoch 45/500, Training Loss: 0.6938404328585639, Validation Loss: 0.6930731181779778Epoch 46/500, Training Loss: 0.6938405972405166, Validation Loss: 0.6930731306202407Epoch 47/500, Training Loss: 0.6938407590582575, Validation Loss: 0.6930731414242095Epoch 48/500, Training Loss: 0.6938409182317876, Validation Loss: 0.6930731505008734Epoch 49/500, Training Loss: 0.6938410746736753, Validation Loss: 0.6930731577528999Epoch 50/500, Training Loss: 0.693841228288343, Validation Loss: 0.6930731630737799Epoch 51/500, Training Loss: 0.6938413789712707, Validation Loss: 0.6930731663468681Epoch 52/500, Training Loss: 0.6938415266081014, Validation Loss: 0.6930731674443024Epoch 53/500, Training Loss: 0.6938416710736404, Validation Loss: 0.6930731662257952Epoch 54/500, Training Loss: 0.6938418122307384, Validation Loss: 0.693073162537265Epoch 55/500, Training Loss: 0.6938419499290225, Validation Loss: 0.6930731562092975Epoch 
56/500, Training Loss: 0.6938420840034827, Validation Loss: 0.6930731470554052Epoch 57/500, Training Loss: 0.693842214272866, Validation Loss: 0.693073134870055Epoch 58/500, Training Loss: 0.6938423405378735, Validation Loss: 0.6930731194264326Epoch 59/500, Training Loss: 0.6938424625791073, Validation Loss: 0.6930731004739016Epoch 60/500, Training Loss: 0.6938425801547495, Validation Loss: 0.6930730777351092Epoch 61/500, Training Loss: 0.693842692997926, Validation Loss: 0.6930730509026835Epoch 62/500, Training Loss: 0.6938428008136998, Validation Loss: 0.6930730196354571Epoch 63/500, Training Loss: 0.693842903275637, Validation Loss: 0.6930729835541385Epoch 64/500, Training Loss: 0.6938430000218927, Validation Loss: 0.693072942236339Epoch 65/500, Training Loss: 0.6938430906507008, Validation Loss: 0.6930728952108474Epoch 66/500, Training Loss: 0.693843174715213, Validation Loss: 0.6930728419510227Epoch 67/500, Training Loss: 0.6938432517175389, Validation Loss: 0.6930727818671438Epoch 68/500, Training Loss: 0.6938433211018625, Validation Loss: 0.6930727142975385Epoch 69/500, Training Loss: 0.6938433822464724, Validation Loss: 0.6930726384982581Epoch 70/500, Training Loss: 0.6938434344545017, Validation Loss: 0.693072553631034Epoch 71/500, Training Loss: 0.6938434769431442, Validation Loss: 0.6930724587491823Epoch 72/500, Training Loss: 0.6938435088310442, Validation Loss: 0.6930723527810583Epoch 73/500, Training Loss: 0.6938435291235245, Validation Loss: 0.6930722345105781Epoch 74/500, Training Loss: 0.6938435366952104, Validation Loss: 0.6930721025541985Epoch 75/500, Training Loss: 0.6938435302695234, Validation Loss: 0.6930719553336341Epoch 76/500, Training Loss: 0.6938435083943958, Validation Loss: 0.6930717910433863Epoch 77/500, Training Loss: 0.6938434694134069, Validation Loss: 0.6930716076119671Epoch 78/500, Training Loss: 0.6938434114313301, Validation Loss: 0.6930714026554032Epoch 79/500, Training Loss: 0.6938433322728612, Validation Loss: 0.6930711734212502Epoch 80/500, Training Loss: 0.6938432294329535, Validation Loss: 0.6930709167208982Epoch 81/500, Training Loss: 0.6938431000167883, Validation Loss: 0.6930706288473333Epoch 82/500, Training Loss: 0.6938429406668771, Validation Loss: 0.6930703054747741Epoch 83/500, Training Loss: 0.6938427474741147, Validation Loss: 0.6930699415355485Epoch 84/500, Training Loss: 0.6938425158686674, Validation Loss: 0.6930695310682855Epoch 85/500, Training Loss: 0.6938422404854105, Validation Loss: 0.693069067029663Epoch 86/500, Training Loss: 0.6938419149970293, Validation Loss: 0.6930685410596321Epoch 87/500, Training Loss: 0.6938415319057422, Validation Loss: 0.693067943186772Epoch 88/500, Training Loss: 0.6938410822817388, Validation Loss: 0.6930672614561104Epoch 89/500, Training Loss: 0.6938405554324633, Validation Loss: 0.6930664814557203Epoch 90/500, Training Loss: 0.6938399384814483, Validation Loss: 0.6930655857101333Epoch 91/500, Training Loss: 0.6938392158278543, Validation Loss: 0.6930645528969764Epoch 92/500, Training Loss: 0.6938383684472804, Validation Loss: 0.6930633568268564Epoch 93/500, Training Loss: 0.6938373729793372, Validation Loss: 0.6930619651030386Epoch 94/500, Training Loss: 0.6938362005258787, Validation Loss: 0.6930603373435287Epoch 95/500, Training Loss: 0.6938348150523392, Validation Loss: 0.6930584227984113Epoch 96/500, Training Loss: 0.6938331712382491, Validation Loss: 0.6930561571213608Epoch 97/500, Training Loss: 0.6938312115537174, Validation Loss: 0.6930534579427172Epoch 98/500, Training Loss: 
0.6938288622333804, Validation Loss: 0.6930502187206086Epoch 99/500, Training Loss: 0.6938260276567246, Validation Loss: 0.6930463000800442Epoch 100/500, Training Loss: 0.6938225823880269, Validation Loss: 0.6930415174263735Epoch 101/500, Training Loss: 0.6938183597189793, Validation Loss: 0.6930356229324798Epoch 102/500, Training Loss: 0.693813134884837, Validation Loss: 0.6930282788594716Epoch 103/500, Training Loss: 0.6938065999966454, Validation Loss: 0.693019017232968Epoch 104/500, Training Loss: 0.6937983257883713, Validation Loss: 0.6930071775114993Epoch 105/500, Training Loss: 0.6937877018310186, Validation Loss: 0.6929918077864172Epoch 106/500, Training Loss: 0.6937738405531877, Validation Loss: 0.6929715036946256Epoch 107/500, Training Loss: 0.6937554184206756, Validation Loss: 0.6929441372468684Epoch 108/500, Training Loss: 0.6937304039151448, Validation Loss: 0.6929063833706766Epoch 109/500, Training Loss: 0.6936955728042912, Validation Loss: 0.6928528577234684Epoch 110/500, Training Loss: 0.693645603709904, Validation Loss: 0.6927744675781601Epoch 111/500, Training Loss: 0.6935712967331118, Validation Loss: 0.6926550690220512Epoch 112/500, Training Loss: 0.6934558306129779, Validation Loss: 0.6924642018842031Epoch 113/500, Training Loss: 0.6932662557925968, Validation Loss: 0.6921398946907147Epoch 114/500, Training Loss: 0.6929321760346716, Validation Loss: 0.6915433835506235Epoch 115/500, Training Loss: 0.6922852338031654, Validation Loss: 0.690322326493832Epoch 116/500, Training Loss: 0.6908567068096106, Validation Loss: 0.6874151636273014Epoch 117/500, Training Loss: 0.6870324506683514, Validation Loss: 0.6787561304066176Epoch 118/500, Training Loss: 0.6733747897970479, Validation Loss: 0.6431277392596677Epoch 119/500, Training Loss: 0.6110110567407943, Validation Loss: 0.4848701408677222Epoch 120/500, Training Loss: 0.4736566724036953, Validation Loss: 0.3173992450004039Epoch 121/500, Training Loss: 0.40035097099555095, Validation Loss: 0.277943612659349Epoch 122/500, Training Loss: 0.37144561994730846, Validation Loss: 0.2685723547007509Epoch 123/500, Training Loss: 0.35816605769835197, Validation Loss: 0.2667899677308102Epoch 124/500, Training Loss: 0.3562649596182335, Validation Loss: 0.27018068753775637Epoch 125/500, Training Loss: 0.36053500369403635, Validation Loss: 0.2767783079159651Epoch 126/500, Training Loss: 0.36692589129035397, Validation Loss: 0.28474769492022Epoch 127/500, Training Loss: 0.37376723243254534, Validation Loss: 0.29309477452875865Epoch 128/500, Training Loss: 0.38061292881888964, Validation Loss: 0.30141525179216805Epoch 129/500, Training Loss: 0.38737481387200745, Validation Loss: 0.30955106054466974Epoch 130/500, Training Loss: 0.39402621353471534, Validation Loss: 0.3174382050314156Epoch 131/500, Training Loss: 0.4005369594932785, Validation Loss: 0.32505402353607266Epoch 132/500, Training Loss: 0.4068686061406671, Validation Loss: 0.3323975387695557Epoch 133/500, Training Loss: 0.41298091031603545, Validation Loss: 0.33948104833569126Epoch 134/500, Training Loss: 0.4188385554688536, Validation Loss: 0.34632579811293734Epoch 135/500, Training Loss: 0.424417640812468, Validation Loss: 0.3529559489443036Epoch 136/500, Training Loss: 0.4297118856918716, Validation Loss: 0.3593860140259882Epoch 137/500, Training Loss: 0.4347312393054944, Validation Loss: 0.3656074712234571Epoch 138/500, Training Loss: 0.4394903040048971, Validation Loss: 0.37159753263435563Epoch 139/500, Training Loss: 0.4440129471373336, Validation Loss: 
0.37734937136587293Epoch 140/500, Training Loss: 0.44830691167515546, Validation Loss: 0.3828499972116012Epoch 141/500, Training Loss: 0.4520979213014007, Validation Loss: 0.3880493117632257Epoch 142/500, Training Loss: 0.4546760803702953, Validation Loss: 0.39302750974540196Epoch 143/500, Training Loss: 0.4575022997960237, Validation Loss: 0.3977377776947061Epoch 144/500, Training Loss: 0.4619790123595696, Validation Loss: 0.40237463233502035Epoch 145/500, Training Loss: 0.4647093091985665, Validation Loss: 0.40604546852671924Epoch 146/500, Training Loss: 0.4625871802105001, Validation Loss: 0.41165902375991414Epoch 147/500, Training Loss: 0.47149666038435045, Validation Loss: 0.41553732619708433Epoch 148/500, Training Loss: 0.4656776144990106, Validation Loss: 0.4222005628193415Epoch 149/500, Training Loss: 0.4715643029744805, Validation Loss: 0.4290666169852961Epoch 150/500, Training Loss: 0.47078927171667895, Validation Loss: 0.43271266195597996Epoch 151/500, Training Loss: 0.47426274401003976, Validation Loss: 0.43057776978286394Epoch 152/500, Training Loss: 0.4695005493867931, Validation Loss: 0.42992693018889155Epoch 153/500, Training Loss: 0.47080288797655595, Validation Loss: 0.42560627395359896Epoch 154/500, Training Loss: 0.46963464123351645, Validation Loss: 0.41751997140232255Epoch 155/500, Training Loss: 0.465769017150815, Validation Loss: 0.40633473964027456Epoch 156/500, Training Loss: 0.45659192502794876, Validation Loss: 0.38244129358841367Epoch 157/500, Training Loss: 0.450453291862609, Validation Loss: 0.3697579871627842Epoch 158/500, Training Loss: 0.42767006038292693, Validation Loss: 0.36441683554397125Epoch 159/500, Training Loss: 0.4278841873322067, Validation Loss: 0.36831396002997147Epoch 160/500, Training Loss: 0.4350889747444578, Validation Loss: 0.3416594383176366Epoch 161/500, Training Loss: 0.4268789369894761, Validation Loss: 0.3269322416128602Epoch 162/500, Training Loss: 0.42707259052812957, Validation Loss: 0.3230123363731706Epoch 163/500, Training Loss: 0.4319237285809539, Validation Loss: 0.3108841630953002Epoch 164/500, Training Loss: 0.4091725256645967, Validation Loss: 0.30171037007209356Epoch 165/500, Training Loss: 0.403161828576155, Validation Loss: 0.2937193295258006Epoch 166/500, Training Loss: 0.3866199479737715, Validation Loss: 0.2947074402329022Epoch 167/500, Training Loss: 0.37556959737554063, Validation Loss: 0.2928506596264456Epoch 168/500, Training Loss: 0.3734391414877584, Validation Loss: 0.2840541218610041Epoch 169/500, Training Loss: 0.3842686461484527, Validation Loss: 0.2840915036714221Epoch 170/500, Training Loss: 0.3796618037218979, Validation Loss: 0.27428388919876323Epoch 171/500, Training Loss: 0.38509269095541476, Validation Loss: 0.273670198783664Epoch 172/500, Training Loss: 0.37874437544750006, Validation Loss: 0.2462832201520953Epoch 173/500, Training Loss: 0.38206433198609285, Validation Loss: 0.2532454579787918Epoch 174/500, Training Loss: 0.3842874753574858, Validation Loss: 0.25104012167485246Epoch 175/500, Training Loss: 0.37764055615101844, Validation Loss: 0.2347710029616171Epoch 176/500, Training Loss: 0.3716600022752443, Validation Loss: 0.23449375079566517Epoch 177/500, Training Loss: 0.3717906609233694, Validation Loss: 0.23375142424590814Epoch 178/500, Training Loss: 0.37154625170339356, Validation Loss: 0.23330336015308434Epoch 179/500, Training Loss: 0.3713851789542863, Validation Loss: 0.23312435565518555Epoch 180/500, Training Loss: 0.37151749246005883, Validation Loss: 0.23309158804535848Epoch 181/500, 
Training Loss: 0.3720501074596013, Validation Loss: 0.23310627768982028Epoch 182/500, Training Loss: 0.37278852224661063, Validation Loss: 0.2331364097935602Epoch 183/500, Training Loss: 0.37379590451005273, Validation Loss: 0.23316732832267517Epoch 184/500, Training Loss: 0.3678184421531972, Validation Loss: 0.23277993978251219Epoch 185/500, Training Loss: 0.36531880819326185, Validation Loss: 0.23116978977382452Epoch 186/500, Training Loss: 0.36441497114244265, Validation Loss: 0.2308009337880444Epoch 187/500, Training Loss: 0.36443329433356686, Validation Loss: 0.22648344263595624Epoch 188/500, Training Loss: 0.36124020170265564, Validation Loss: 0.22977648770771877Epoch 189/500, Training Loss: 0.3635533185992381, Validation Loss: 0.22891289911838922Epoch 190/500, Training Loss: 0.3551159323421681, Validation Loss: 0.21177134230501876Epoch 191/500, Training Loss: 0.3619725417844676, Validation Loss: 0.22024757019031807Epoch 192/500, Training Loss: 0.3560304905155985, Validation Loss: 0.20140589404075Epoch 193/500, Training Loss: 0.36037372485666774, Validation Loss: 0.20252308787339196Epoch 194/500, Training Loss: 0.36342206995994036, Validation Loss: 0.20127823016678298Epoch 195/500, Training Loss: 0.35375068818677824, Validation Loss: 0.19836669535005097Epoch 196/500, Training Loss: 0.36079592717657827, Validation Loss: 0.19890592929652065Epoch 197/500, Training Loss: 0.35534045115403484, Validation Loss: 0.19881134020840638Epoch 198/500, Training Loss: 0.35554308370929444, Validation Loss: 0.1983279963895001Epoch 199/500, Training Loss: 0.3577063298351421, Validation Loss: 0.19883419303932987Epoch 200/500, Training Loss: 0.358459589953233, Validation Loss: 0.19841381324713078Epoch 201/500, Training Loss: 0.3568743404319036, Validation Loss: 0.1993420293979677Epoch 202/500, Training Loss: 0.35182218483922206, Validation Loss: 0.20103812350732203Epoch 203/500, Training Loss: 0.36122115493140555, Validation Loss: 0.2004749865176585Epoch 204/500, Training Loss: 0.3554905422702459, Validation Loss: 0.2012040081498317Epoch 205/500, Training Loss: 0.36249650655827864, Validation Loss: 0.20115446245150342Epoch 206/500, Training Loss: 0.35434711958401827, Validation Loss: 0.2018980301054289Epoch 207/500, Training Loss: 0.3607217808275742, Validation Loss: 0.19979797923555276Epoch 208/500, Training Loss: 0.3706332949778392, Validation Loss: 0.20116957745207126Epoch 209/500, Training Loss: 0.36942078729127686, Validation Loss: 0.20168049537127283Epoch 210/500, Training Loss: 0.366555958505106, Validation Loss: 0.2014595118749581Epoch 211/500, Training Loss: 0.37352476724437933, Validation Loss: 0.20156900070849143Epoch 212/500, Training Loss: 0.35852386236932093, Validation Loss: 0.20251265642506983Epoch 213/500, Training Loss: 0.3674915917827323, Validation Loss: 0.20222680672921692Epoch 214/500, Training Loss: 0.3607497127733877, Validation Loss: 0.15932337147088Epoch 215/500, Training Loss: 0.3795959457041952, Validation Loss: 0.20255901620112393Epoch 216/500, Training Loss: 0.35732910599858025, Validation Loss: 0.158028683815184Epoch 217/500, Training Loss: 0.3701139263927425, Validation Loss: 0.15798484854792363Epoch 218/500, Training Loss: 0.37046895584893685, Validation Loss: 0.15795957428866592Epoch 219/500, Training Loss: 0.3723141028861949, Validation Loss: 0.15799803929700076Epoch 220/500, Training Loss: 0.37661904883728436, Validation Loss: 0.15813560429432252Epoch 221/500, Training Loss: 0.37603364234813624, Validation Loss: 0.1583626094220999Epoch 222/500, Training Loss: 
0.3599946207710859, Validation Loss: 0.20379439420789933Epoch 223/500, Training Loss: 0.3809298497042677, Validation Loss: 0.20364718317120292Epoch 224/500, Training Loss: 0.3756965368887626, Validation Loss: 0.20349758395331632Epoch 225/500, Training Loss: 0.3744705716376558, Validation Loss: 0.1672236471874602Epoch 226/500, Training Loss: 0.3775351485885022, Validation Loss: 0.18399975234466015Epoch 227/500, Training Loss: 0.37378295034412223, Validation Loss: 0.16326688175084286Epoch 228/500, Training Loss: 0.3728228628224514, Validation Loss: 0.16419761950130374Epoch 229/500, Training Loss: 0.3753777435733812, Validation Loss: 0.16394489798640652Epoch 230/500, Training Loss: 0.384237866439109, Validation Loss: 0.16314860554439586Epoch 231/500, Training Loss: 0.3784783638148934, Validation Loss: 0.16360765086609907Epoch 232/500, Training Loss: 0.3981033056538056, Validation Loss: 0.16393768580364262Epoch 233/500, Training Loss: 0.3899565998304074, Validation Loss: 0.10946575194071559Epoch 234/500, Training Loss: 0.40979702534678425, Validation Loss: 0.16530777746607056Epoch 235/500, Training Loss: 0.3963683763563531, Validation Loss: 0.11709320353553199Epoch 236/500, Training Loss: 0.42194311430013054, Validation Loss: 0.12498868478354748Epoch 237/500, Training Loss: 0.47266318271649527, Validation Loss: 0.14692322423070692Epoch 238/500, Training Loss: 0.4188073990498052, Validation Loss: 0.1467804203859821Epoch 239/500, Training Loss: 0.42008939169878606, Validation Loss: 0.14375793722338376Epoch 240/500, Training Loss: 0.44164216413289087, Validation Loss: 0.12058382006638732Epoch 241/500, Training Loss: 0.5017322658443957, Validation Loss: 0.13102337420983262Epoch 242/500, Training Loss: 0.6413221816097037, Validation Loss: 0.17076065399802703Epoch 243/500, Training Loss: 0.4408766802959462, Validation Loss: 0.16605583111702546Epoch 244/500, Training Loss: 0.45009347905938535, Validation Loss: 0.16382365940481633Epoch 245/500, Training Loss: 0.7425092753348743, Validation Loss: 0.16479362347208895Epoch 246/500, Training Loss: 0.5431032441406755, Validation Loss: 0.20624618964475522Epoch 247/500, Training Loss: 0.4576888282638142, Validation Loss: 0.20585171090136306Epoch 248/500, Training Loss: 0.5512311121553312, Validation Loss: 0.19880076710282232Epoch 249/500, Training Loss: 0.575146685088921, Validation Loss: 0.3340505885872663Epoch 250/500, Training Loss: 0.7907566236909929, Validation Loss: 0.21810568437028746Epoch 251/500, Training Loss: 0.6796962243223882, Validation Loss: 0.23422040166504451Epoch 252/500, Training Loss: 0.570624530415114, Validation Loss: 0.1836411865122896Epoch 253/500, Training Loss: 0.6455521244388341, Validation Loss: 0.3341120180969982Epoch 254/500, Training Loss: 1.1591869130245078, Validation Loss: 0.33652754737214774Epoch 255/500, Training Loss: 0.7899362363184701, Validation Loss: 0.2596949838903642Epoch 256/500, Training Loss: 0.6784193589993381, Validation Loss: 0.19898400793630347Epoch 257/500, Training Loss: 1.0982135602008183, Validation Loss: 0.3099376382403701Epoch 258/500, Training Loss: 0.948003195784325, Validation Loss: 0.2556242837431584Epoch 259/500, Training Loss: 0.7027735049172693, Validation Loss: 0.15327995670437827Epoch 260/500, Training Loss: 0.5255343610779291, Validation Loss: 0.1842764248048371Epoch 261/500, Training Loss: 0.5513907403401368, Validation Loss: 0.17130012982802043Epoch 262/500, Training Loss: 0.8382909909275847, Validation Loss: 0.2178264762711017Epoch 263/500, Training Loss: 0.500765730595941, Validation Loss: 
0.17636745667673887
[Verbose per-epoch output condensed; summary of the logged runs follows.]
500-epoch run, epochs 264-500: training became unstable, with the training loss climbing from ~0.59 at epoch 264 to ~5.1 by epoch 276. At epoch 282 the training loss was nan and the validation loss spiked to ~230; from epoch 283 through 500 both losses were nan, indicating the weights had overflowed.
800-epoch run, epochs 1-134: both losses sat on a plateau, training near 0.6939 and validation near 0.6931, changing only in the fifth decimal place.
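A plateau at ~0.693 is worth recognizing on sight: assuming the quantity being logged is binary cross-entropy (which these values strongly suggest), 0.6931 = ln(2) is exactly the loss of a model that predicts p = 0.5 for every sample, i.e. a network that has not yet started learning. A quick standalone check (illustrative, not part of the assignment code):

import numpy as np

# Binary cross-entropy of a classifier that outputs p = 0.5 for every
# sample, whatever the true label y in {0, 1}:
#   -(y * log(0.5) + (1 - y) * log(0.5)) = log(2)
print(np.log(2))  # 0.6931471805599453, matching the plateau above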
800-epoch run, epochs 135-225: the plateau held until roughly epoch 200, then the loss dropped quickly (epoch 206: training ~0.533; epoch 208: training ~0.422 and validation ~0.282, the best values of the run) before immediately destabilizing, the training loss climbing back through ~0.6-2.1 and reaching ~5.6 (validation ~9.5) at epoch 225. From epoch 226 onward both losses were nan.
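Once a loss turns nan the weights already contain non-finite values, so every remaining epoch is wasted computation, and logging hundreds of nan lines adds no information. A minimal guard is to stop the loop as soon as either loss is non-finite. The sketch below is illustrative only: train_epoch is a hypothetical callable standing in for one epoch of whatever training loop produced these logs, not part of the assignment code.

import numpy as np

def fit_with_nan_guard(train_epoch, n_epochs):
    """Run up to n_epochs, stopping early if a loss becomes non-finite.

    train_epoch: hypothetical callable that performs one epoch of
    training and returns (training_loss, validation_loss) as floats.
    """
    history = {"train": [], "val": []}
    for epoch in range(1, n_epochs + 1):
        train_loss, val_loss = train_epoch()
        history["train"].append(train_loss)
        history["val"].append(val_loss)
        print(f"Epoch {epoch}/{n_epochs}, Training Loss: {train_loss}, "
              f"Validation Loss: {val_loss}")
        if not (np.isfinite(train_loss) and np.isfinite(val_loss)):
            # Overflowed weights cannot recover; lowering the learning
            # rate (or clipping gradients) helps more than extra epochs.
            print(f"Non-finite loss at epoch {epoch}; stopping early.")
            break
    return history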
800-epoch run, epochs 473-800: both losses remained nan through the final epoch.
100-epoch run: both losses plateaued near 0.6950 (training) / 0.6932 (validation) until about epoch 87, then fell sharply; by epoch 91 the training loss was ~0.346 and the validation loss ~0.242 (the best of the run), and the run finished at epoch 100 with training ~0.370 and validation ~0.258.
300-epoch run: the first 24 epochs again sit on the same plateau near 0.6951 (training) / 0.6932 (validation); the raw log resumes below.
Epoch 24/300, Training Loss: 0.6951122214979166, Validation
Loss: 0.6932488380590907Epoch 25/300, Training Loss: 0.6951114163676554, Validation Loss: 0.693248791044113Epoch 26/300, Training Loss: 0.6951106279314305, Validation Loss: 0.6932487450381173Epoch 27/300, Training Loss: 0.6951098561875969, Validation Loss: 0.693248700093526Epoch 28/300, Training Loss: 0.6951091011382945, Validation Loss: 0.6932486562625759Epoch 29/300, Training Loss: 0.6951083627906324, Validation Loss: 0.6932486135976397Epoch 30/300, Training Loss: 0.695107641157856, Validation Loss: 0.6932485721515349Epoch 31/300, Training Loss: 0.6951069362605089, Validation Loss: 0.6932485319778183Epoch 32/300, Training Loss: 0.6951062481276087, Validation Loss: 0.693248493131072Epoch 33/300, Training Loss: 0.6951055767978451, Validation Loss: 0.6932484556671794Epoch 34/300, Training Loss: 0.6951049223208094, Validation Loss: 0.6932484196435942Epoch 35/300, Training Loss: 0.6951042847582772, Validation Loss: 0.6932483851196033Epoch 36/300, Training Loss: 0.6951036641855514, Validation Loss: 0.6932483521565845Epoch 37/300, Training Loss: 0.6951030606928815, Validation Loss: 0.6932483208182589Epoch 38/300, Training Loss: 0.695102474386972, Validation Loss: 0.6932482911709369Epoch 39/300, Training Loss: 0.6951019053926004, Validation Loss: 0.693248263283755Epoch 40/300, Training Loss: 0.6951013538543561, Validation Loss: 0.6932482372289057Epoch 41/300, Training Loss: 0.6951008199385226, Validation Loss: 0.6932482130818496Epoch 42/300, Training Loss: 0.6951003038351187, Validation Loss: 0.6932481909215096Epoch 43/300, Training Loss: 0.6950998057601167, Validation Loss: 0.6932481708304359Epoch 44/300, Training Loss: 0.6950993259578735, Validation Loss: 0.6932481528949337Epoch 45/300, Training Loss: 0.6950988647037852, Validation Loss: 0.6932481372051352Epoch 46/300, Training Loss: 0.6950984223071897, Validation Loss: 0.6932481238549981Epoch 47/300, Training Loss: 0.6950979991145609, Validation Loss: 0.6932481129422056Epoch 48/300, Training Loss: 0.6950975955130055, Validation Loss: 0.6932481045679327Epoch 49/300, Training Loss: 0.6950972119341039, Validation Loss: 0.6932480988364297Epoch 50/300, Training Loss: 0.6950968488581235, Validation Loss: 0.6932480958543653Epoch 51/300, Training Loss: 0.6950965068186293, Validation Loss: 0.6932480957298502Epoch 52/300, Training Loss: 0.6950961864075218, Validation Loss: 0.6932480985710293Epoch 53/300, Training Loss: 0.6950958882805243, Validation Loss: 0.6932481044841059Epoch 54/300, Training Loss: 0.6950956131631361, Validation Loss: 0.6932481135706042Epoch 55/300, Training Loss: 0.695095361857017, Validation Loss: 0.6932481259236234Epoch 56/300, Training Loss: 0.6950951352468216, Validation Loss: 0.6932481416227316Epoch 57/300, Training Loss: 0.6950949343073508, Validation Loss: 0.6932481607270566Epoch 58/300, Training Loss: 0.695094760110933, Validation Loss: 0.6932481832659325Epoch 59/300, Training Loss: 0.6950946138347803, Validation Loss: 0.693248209226267Epoch 60/300, Training Loss: 0.6950944967679583, Validation Loss: 0.6932482385354464Epoch 61/300, Training Loss: 0.6950944103173964, Validation Loss: 0.6932482710381757Epoch 62/300, Training Loss: 0.6950943560120502, Validation Loss: 0.6932483064649871Epoch 63/300, Training Loss: 0.6950943355038686, Validation Loss: 0.6932483443892695Epoch 64/300, Training Loss: 0.6950943505635373, Validation Loss: 0.6932483841683162Epoch 65/300, Training Loss: 0.6950944030678998, Validation Loss: 0.693248424861997Epoch 66/300, Training Loss: 0.6950944949743885, Validation Loss: 0.6932484651197696Epoch 
67/300, Training Loss: 0.6950946282753936, Validation Loss: 0.6932485030224987Epoch 68/300, Training Loss: 0.6950948049216994, Validation Loss: 0.6932485358590391Epoch 69/300, Training Loss: 0.6950950266983177, Validation Loss: 0.6932485598075575Epoch 70/300, Training Loss: 0.6950952950267736, Validation Loss: 0.6932485694759627Epoch 71/300, Training Loss: 0.6950956106530739, Validation Loss: 0.6932485572309717Epoch 72/300, Training Loss: 0.6950959731564452, Validation Loss: 0.6932485122051648Epoch 73/300, Training Loss: 0.6950963801739272, Validation Loss: 0.6932484188049994Epoch 74/300, Training Loss: 0.6950968261683688, Validation Loss: 0.6932482544306952Epoch 75/300, Training Loss: 0.6950973004507686, Validation Loss: 0.6932479859250702Epoch 76/300, Training Loss: 0.695097783961881, Validation Loss: 0.6932475639241218Epoch 77/300, Training Loss: 0.6950982439437048, Validation Loss: 0.6932469136519701Epoch 78/300, Training Loss: 0.6950986249308967, Validation Loss: 0.6932459195105077Epoch 79/300, Training Loss: 0.6950988331344975, Validation Loss: 0.6932443984717358Epoch 80/300, Training Loss: 0.695098708554721, Validation Loss: 0.6932420524786762Epoch 81/300, Training Loss: 0.695097973393493, Validation Loss: 0.6932383797228513Epoch 82/300, Training Loss: 0.6950961325345306, Validation Loss: 0.6932325011153869Epoch 83/300, Training Loss: 0.6950922716432067, Validation Loss: 0.6932228009365904Epoch 84/300, Training Loss: 0.6950846217684945, Validation Loss: 0.6932061296081318Epoch 85/300, Training Loss: 0.6950695469761297, Validation Loss: 0.6931758786026283Epoch 86/300, Training Loss: 0.6950389561432758, Validation Loss: 0.6931168072283329Epoch 87/300, Training Loss: 0.694972818419324, Validation Loss: 0.6929890591701742Epoch 88/300, Training Loss: 0.6948135893932027, Validation Loss: 0.6926684006749105Epoch 89/300, Training Loss: 0.6943550476365965, Validation Loss: 0.691650776198077Epoch 90/300, Training Loss: 0.6925323201521499, Validation Loss: 0.6867477625293511Epoch 91/300, Training Loss: 0.6781262348393579, Validation Loss: 0.6306307437498944Epoch 92/300, Training Loss: 0.5338713943120379, Validation Loss: 0.3297310036292068Epoch 93/300, Training Loss: 0.4112053015424416, Validation Loss: 0.28642724092614075Epoch 94/300, Training Loss: 0.3785731823944171, Validation Loss: 0.2839710204077933Epoch 95/300, Training Loss: 0.38327636697397605, Validation Loss: 0.29971898458936436Epoch 96/300, Training Loss: 0.39965281074819325, Validation Loss: 0.3179137636138992Epoch 97/300, Training Loss: 0.413959267090764, Validation Loss: 0.33484784423058944Epoch 98/300, Training Loss: 0.4251158969690242, Validation Loss: 0.3488675786828612Epoch 99/300, Training Loss: 0.43246127637909354, Validation Loss: 0.3573791778642896Epoch 100/300, Training Loss: 0.4384645142741026, Validation Loss: 0.361953156640313Epoch 101/300, Training Loss: 0.44769423493143173, Validation Loss: 0.36778638674157416Epoch 102/300, Training Loss: 0.4549904931207648, Validation Loss: 0.37594960562169116Epoch 103/300, Training Loss: 0.4644997224013795, Validation Loss: 0.38705743293222555Epoch 104/300, Training Loss: 0.4714200859975578, Validation Loss: 0.39523112328454285Epoch 105/300, Training Loss: 0.47373062197207416, Validation Loss: 0.4128043435206722Epoch 106/300, Training Loss: 0.48599440050523957, Validation Loss: 0.44049692125676565Epoch 107/300, Training Loss: 0.49547057483635537, Validation Loss: 0.45826744679166087Epoch 108/300, Training Loss: 0.5204326786497596, Validation Loss: 0.41498100123350384Epoch 
109/300, Training Loss: 0.487805887088678, Validation Loss: 0.3063370169453775Epoch 110/300, Training Loss: 0.455245706920555, Validation Loss: 0.2217287038694286Epoch 111/300, Training Loss: 0.4328930030294208, Validation Loss: 0.2638898604928547Epoch 112/300, Training Loss: 0.40964589751171676, Validation Loss: 0.284928306315546Epoch 113/300, Training Loss: 0.3956930526457183, Validation Loss: 0.279530840201081Epoch 114/300, Training Loss: 0.40219541177511564, Validation Loss: 0.2524376923515381Epoch 115/300, Training Loss: 0.39271798174619604, Validation Loss: 0.2820231256135736Epoch 116/300, Training Loss: 0.4039214132464298, Validation Loss: 0.28161266879392055Epoch 117/300, Training Loss: 0.4125001609273924, Validation Loss: 0.27811696325567375Epoch 118/300, Training Loss: 0.40873258405896556, Validation Loss: 0.3304546740675196Epoch 119/300, Training Loss: 0.41105101278867834, Validation Loss: 0.3398774857114718Epoch 120/300, Training Loss: 0.41953242226054027, Validation Loss: 0.3256863448230888Epoch 121/300, Training Loss: 0.41044253550722354, Validation Loss: 0.3305160460133244Epoch 122/300, Training Loss: 0.45643336702393533, Validation Loss: 0.34743690762585216Epoch 123/300, Training Loss: 0.4569786442869753, Validation Loss: 0.3413438222138265Epoch 124/300, Training Loss: 0.47678023590759083, Validation Loss: 0.46837065156427604Epoch 125/300, Training Loss: 0.6738826575777793, Validation Loss: 0.9254428042550428Epoch 126/300, Training Loss: 1.0199622208281078, Validation Loss: 1.2252298613438433Epoch 127/300, Training Loss: 1.3294205993795365, Validation Loss: 1.1692053398375049Epoch 128/300, Training Loss: 1.2443897551650998, Validation Loss: 1.1062409596613312Epoch 129/300, Training Loss: 1.482020052102248, Validation Loss: 0.3485586784908631Epoch 130/300, Training Loss: 1.904122715504614, Validation Loss: 1.901460983357552Epoch 131/300, Training Loss: nan, Validation Loss: nanEpoch 132/300, Training Loss: nan, Validation Loss: nanEpoch 133/300, Training Loss: nan, Validation Loss: nanEpoch 134/300, Training Loss: nan, Validation Loss: nanEpoch 135/300, Training Loss: nan, Validation Loss: nanEpoch 136/300, Training Loss: nan, Validation Loss: nanEpoch 137/300, Training Loss: nan, Validation Loss: nanEpoch 138/300, Training Loss: nan, Validation Loss: nanEpoch 139/300, Training Loss: nan, Validation Loss: nanEpoch 140/300, Training Loss: nan, Validation Loss: nanEpoch 141/300, Training Loss: nan, Validation Loss: nanEpoch 142/300, Training Loss: nan, Validation Loss: nanEpoch 143/300, Training Loss: nan, Validation Loss: nanEpoch 144/300, Training Loss: nan, Validation Loss: nanEpoch 145/300, Training Loss: nan, Validation Loss: nanEpoch 146/300, Training Loss: nan, Validation Loss: nanEpoch 147/300, Training Loss: nan, Validation Loss: nanEpoch 148/300, Training Loss: nan, Validation Loss: nanEpoch 149/300, Training Loss: nan, Validation Loss: nanEpoch 150/300, Training Loss: nan, Validation Loss: nanEpoch 151/300, Training Loss: nan, Validation Loss: nanEpoch 152/300, Training Loss: nan, Validation Loss: nanEpoch 153/300, Training Loss: nan, Validation Loss: nanEpoch 154/300, Training Loss: nan, Validation Loss: nanEpoch 155/300, Training Loss: nan, Validation Loss: nanEpoch 156/300, Training Loss: nan, Validation Loss: nanEpoch 157/300, Training Loss: nan, Validation Loss: nanEpoch 158/300, Training Loss: nan, Validation Loss: nanEpoch 159/300, Training Loss: nan, Validation Loss: nanEpoch 160/300, Training Loss: nan, Validation Loss: nanEpoch 161/300, Training Loss: nan, 
Validation Loss: nanEpoch 162/300, Training Loss: nan, Validation Loss: nanEpoch 163/300, Training Loss: nan, Validation Loss: nanEpoch 164/300, Training Loss: nan, Validation Loss: nanEpoch 165/300, Training Loss: nan, Validation Loss: nanEpoch 166/300, Training Loss: nan, Validation Loss: nanEpoch 167/300, Training Loss: nan, Validation Loss: nanEpoch 168/300, Training Loss: nan, Validation Loss: nanEpoch 169/300, Training Loss: nan, Validation Loss: nanEpoch 170/300, Training Loss: nan, Validation Loss: nanEpoch 171/300, Training Loss: nan, Validation Loss: nanEpoch 172/300, Training Loss: nan, Validation Loss: nanEpoch 173/300, Training Loss: nan, Validation Loss: nanEpoch 174/300, Training Loss: nan, Validation Loss: nanEpoch 175/300, Training Loss: nan, Validation Loss: nanEpoch 176/300, Training Loss: nan, Validation Loss: nanEpoch 177/300, Training Loss: nan, Validation Loss: nanEpoch 178/300, Training Loss: nan, Validation Loss: nanEpoch 179/300, Training Loss: nan, Validation Loss: nanEpoch 180/300, Training Loss: nan, Validation Loss: nanEpoch 181/300, Training Loss: nan, Validation Loss: nanEpoch 182/300, Training Loss: nan, Validation Loss: nanEpoch 183/300, Training Loss: nan, Validation Loss: nanEpoch 184/300, Training Loss: nan, Validation Loss: nanEpoch 185/300, Training Loss: nan, Validation Loss: nanEpoch 186/300, Training Loss: nan, Validation Loss: nanEpoch 187/300, Training Loss: nan, Validation Loss: nanEpoch 188/300, Training Loss: nan, Validation Loss: nanEpoch 189/300, Training Loss: nan, Validation Loss: nanEpoch 190/300, Training Loss: nan, Validation Loss: nanEpoch 191/300, Training Loss: nan, Validation Loss: nanEpoch 192/300, Training Loss: nan, Validation Loss: nanEpoch 193/300, Training Loss: nan, Validation Loss: nanEpoch 194/300, Training Loss: nan, Validation Loss: nanEpoch 195/300, Training Loss: nan, Validation Loss: nanEpoch 196/300, Training Loss: nan, Validation Loss: nanEpoch 197/300, Training Loss: nan, Validation Loss: nanEpoch 198/300, Training Loss: nan, Validation Loss: nanEpoch 199/300, Training Loss: nan, Validation Loss: nanEpoch 200/300, Training Loss: nan, Validation Loss: nanEpoch 201/300, Training Loss: nan, Validation Loss: nanEpoch 202/300, Training Loss: nan, Validation Loss: nanEpoch 203/300, Training Loss: nan, Validation Loss: nanEpoch 204/300, Training Loss: nan, Validation Loss: nanEpoch 205/300, Training Loss: nan, Validation Loss: nanEpoch 206/300, Training Loss: nan, Validation Loss: nanEpoch 207/300, Training Loss: nan, Validation Loss: nanEpoch 208/300, Training Loss: nan, Validation Loss: nanEpoch 209/300, Training Loss: nan, Validation Loss: nanEpoch 210/300, Training Loss: nan, Validation Loss: nanEpoch 211/300, Training Loss: nan, Validation Loss: nanEpoch 212/300, Training Loss: nan, Validation Loss: nanEpoch 213/300, Training Loss: nan, Validation Loss: nanEpoch 214/300, Training Loss: nan, Validation Loss: nanEpoch 215/300, Training Loss: nan, Validation Loss: nanEpoch 216/300, Training Loss: nan, Validation Loss: nanEpoch 217/300, Training Loss: nan, Validation Loss: nanEpoch 218/300, Training Loss: nan, Validation Loss: nanEpoch 219/300, Training Loss: nan, Validation Loss: nanEpoch 220/300, Training Loss: nan, Validation Loss: nanEpoch 221/300, Training Loss: nan, Validation Loss: nanEpoch 222/300, Training Loss: nan, Validation Loss: nanEpoch 223/300, Training Loss: nan, Validation Loss: nanEpoch 224/300, Training Loss: nan, Validation Loss: nanEpoch 225/300, Training Loss: nan, Validation Loss: nanEpoch 226/300, 
Training Loss: nan, Validation Loss: nanEpoch 227/300, Training Loss: nan, Validation Loss: nanEpoch 228/300, Training Loss: nan, Validation Loss: nanEpoch 229/300, Training Loss: nan, Validation Loss: nanEpoch 230/300, Training Loss: nan, Validation Loss: nanEpoch 231/300, Training Loss: nan, Validation Loss: nanEpoch 232/300, Training Loss: nan, Validation Loss: nanEpoch 233/300, Training Loss: nan, Validation Loss: nanEpoch 234/300, Training Loss: nan, Validation Loss: nanEpoch 235/300, Training Loss: nan, Validation Loss: nanEpoch 236/300, Training Loss: nan, Validation Loss: nanEpoch 237/300, Training Loss: nan, Validation Loss: nanEpoch 238/300, Training Loss: nan, Validation Loss: nanEpoch 239/300, Training Loss: nan, Validation Loss: nanEpoch 240/300, Training Loss: nan, Validation Loss: nanEpoch 241/300, Training Loss: nan, Validation Loss: nanEpoch 242/300, Training Loss: nan, Validation Loss: nanEpoch 243/300, Training Loss: nan, Validation Loss: nanEpoch 244/300, Training Loss: nan, Validation Loss: nanEpoch 245/300, Training Loss: nan, Validation Loss: nanEpoch 246/300, Training Loss: nan, Validation Loss: nanEpoch 247/300, Training Loss: nan, Validation Loss: nanEpoch 248/300, Training Loss: nan, Validation Loss: nanEpoch 249/300, Training Loss: nan, Validation Loss: nanEpoch 250/300, Training Loss: nan, Validation Loss: nanEpoch 251/300, Training Loss: nan, Validation Loss: nanEpoch 252/300, Training Loss: nan, Validation Loss: nanEpoch 253/300, Training Loss: nan, Validation Loss: nanEpoch 254/300, Training Loss: nan, Validation Loss: nanEpoch 255/300, Training Loss: nan, Validation Loss: nanEpoch 256/300, Training Loss: nan, Validation Loss: nanEpoch 257/300, Training Loss: nan, Validation Loss: nanEpoch 258/300, Training Loss: nan, Validation Loss: nanEpoch 259/300, Training Loss: nan, Validation Loss: nanEpoch 260/300, Training Loss: nan, Validation Loss: nanEpoch 261/300, Training Loss: nan, Validation Loss: nanEpoch 262/300, Training Loss: nan, Validation Loss: nanEpoch 263/300, Training Loss: nan, Validation Loss: nanEpoch 264/300, Training Loss: nan, Validation Loss: nanEpoch 265/300, Training Loss: nan, Validation Loss: nanEpoch 266/300, Training Loss: nan, Validation Loss: nanEpoch 267/300, Training Loss: nan, Validation Loss: nanEpoch 268/300, Training Loss: nan, Validation Loss: nanEpoch 269/300, Training Loss: nan, Validation Loss: nanEpoch 270/300, Training Loss: nan, Validation Loss: nanEpoch 271/300, Training Loss: nan, Validation Loss: nanEpoch 272/300, Training Loss: nan, Validation Loss: nanEpoch 273/300, Training Loss: nan, Validation Loss: nanEpoch 274/300, Training Loss: nan, Validation Loss: nanEpoch 275/300, Training Loss: nan, Validation Loss: nanEpoch 276/300, Training Loss: nan, Validation Loss: nanEpoch 277/300, Training Loss: nan, Validation Loss: nanEpoch 278/300, Training Loss: nan, Validation Loss: nanEpoch 279/300, Training Loss: nan, Validation Loss: nanEpoch 280/300, Training Loss: nan, Validation Loss: nanEpoch 281/300, Training Loss: nan, Validation Loss: nanEpoch 282/300, Training Loss: nan, Validation Loss: nanEpoch 283/300, Training Loss: nan, Validation Loss: nanEpoch 284/300, Training Loss: nan, Validation Loss: nanEpoch 285/300, Training Loss: nan, Validation Loss: nanEpoch 286/300, Training Loss: nan, Validation Loss: nanEpoch 287/300, Training Loss: nan, Validation Loss: nanEpoch 288/300, Training Loss: nan, Validation Loss: nanEpoch 289/300, Training Loss: nan, Validation Loss: nanEpoch 290/300, Training Loss: nan, Validation 
Loss: nanEpoch 291/300, Training Loss: nan, Validation Loss: nanEpoch 292/300, Training Loss: nan, Validation Loss: nanEpoch 293/300, Training Loss: nan, Validation Loss: nanEpoch 294/300, Training Loss: nan, Validation Loss: nanEpoch 295/300, Training Loss: nan, Validation Loss: nanEpoch 296/300, Training Loss: nan, Validation Loss: nanEpoch 297/300, Training Loss: nan, Validation Loss: nanEpoch 298/300, Training Loss: nan, Validation Loss: nanEpoch 299/300, Training Loss: nan, Validation Loss: nanEpoch 300/300, Training Loss: nan, Validation Loss: nanEpoch 1/500, Training Loss: 0.6949198478975072, Validation Loss: 0.6930637344390853Epoch 2/500, Training Loss: 0.6949917013673643, Validation Loss: 0.6932149078783848Epoch 3/500, Training Loss: 0.6950406592603205, Validation Loss: 0.6932388630515229Epoch 4/500, Training Loss: 0.6950481578029841, Validation Loss: 0.6932422193062819Epoch 5/500, Training Loss: 0.6950491578093629, Validation Loss: 0.6932426603145104Epoch 6/500, Training Loss: 0.6950492317981279, Validation Loss: 0.6932426968210206Epoch 7/500, Training Loss: 0.6950491670700681, Validation Loss: 0.6932426783267617Epoch 8/500, Training Loss: 0.695049072854192, Validation Loss: 0.6932426531837319Epoch 9/500, Training Loss: 0.6950489639383087, Validation Loss: 0.6932426279204467Epoch 10/500, Training Loss: 0.6950488419683354, Validation Loss: 0.6932426032712475Epoch 11/500, Training Loss: 0.6950487067388974, Validation Loss: 0.693242579177711Epoch 12/500, Training Loss: 0.6950485577566963, Validation Loss: 0.6932425554809691Epoch 13/500, Training Loss: 0.695048394452674, Validation Loss: 0.6932425320157888Epoch 14/500, Training Loss: 0.6950482162061817, Validation Loss: 0.6932425086217104Epoch 15/500, Training Loss: 0.6950480223426215, Validation Loss: 0.6932424851427773Epoch 16/500, Training Loss: 0.6950478121268772, Validation Loss: 0.6932424614257316Epoch 17/500, Training Loss: 0.6950475847555132, Validation Loss: 0.6932424373180065Epoch 18/500, Training Loss: 0.6950473393480356, Validation Loss: 0.6932424126656506Epoch 19/500, Training Loss: 0.6950470749370933, Validation Loss: 0.6932423873111506Epoch 20/500, Training Loss: 0.6950467904574203, Validation Loss: 0.6932423610910944Epoch 21/500, Training Loss: 0.6950464847333097, Validation Loss: 0.6932423338336002Epoch 22/500, Training Loss: 0.6950461564643197, Validation Loss: 0.6932423053554311Epoch 23/500, Training Loss: 0.6950458042088891, Validation Loss: 0.693242275458689Epoch 24/500, Training Loss: 0.6950454263654773, Validation Loss: 0.6932422439269768Epoch 25/500, Training Loss: 0.6950450211507209, Validation Loss: 0.6932422105208617Epoch 26/500, Training Loss: 0.6950445865740564, Validation Loss: 0.6932421749724671Epoch 27/500, Training Loss: 0.6950441204080662, Validation Loss: 0.693242136978941Epoch 28/500, Training Loss: 0.6950436201536863, Validation Loss: 0.6932420961944988Epoch 29/500, Training Loss: 0.6950430829991668, Validation Loss: 0.6932420522206477Epoch 30/500, Training Loss: 0.6950425057714514, Validation Loss: 0.6932420045940665Epoch 31/500, Training Loss: 0.6950418848782322, Validation Loss: 0.6932419527714769Epoch 32/500, Training Loss: 0.6950412162385604, Validation Loss: 0.6932418961106023Epoch 33/500, Training Loss: 0.6950404951992286, Validation Loss: 0.693241833846024Epoch 34/500, Training Loss: 0.6950397164334443, Validation Loss: 0.6932417650583301Epoch 35/500, Training Loss: 0.6950388738171973, Validation Loss: 0.693241688634384Epoch 36/500, Training Loss: 0.6950379602774223, Validation Loss: 
0.6932416032157234Epoch 37/500, Training Loss: 0.695036967604126, Validation Loss: 0.6932415071309885Epoch 38/500, Training Loss: 0.6950358862161187, Validation Loss: 0.6932413983066154Epoch 39/500, Training Loss: 0.6950347048664167, Validation Loss: 0.693241274147684Epoch 40/500, Training Loss: 0.6950334102684418, Validation Loss: 0.6932411313773283Epoch 41/500, Training Loss: 0.6950319866170613, Validation Loss: 0.693240965817943Epoch 42/500, Training Loss: 0.6950304149684002, Validation Loss: 0.6932407720896149Epoch 43/500, Training Loss: 0.6950286724275299, Validation Loss: 0.6932405431892457Epoch 44/500, Training Loss: 0.6950267310711942, Validation Loss: 0.6932402698951892Epoch 45/500, Training Loss: 0.6950245564995595, Validation Loss: 0.6932399399126556Epoch 46/500, Training Loss: 0.6950221058599948, Validation Loss: 0.6932395366272958Epoch 47/500, Training Loss: 0.6950193251057512, Validation Loss: 0.6932390372553237Epoch 48/500, Training Loss: 0.695016145123696, Validation Loss: 0.6932384100448364Epoch 49/500, Training Loss: 0.6950124761531976, Validation Loss: 0.6932376099508749Epoch 50/500, Training Loss: 0.6950081995591181, Validation Loss: 0.6932365717923491Epoch 51/500, Training Loss: 0.6950031553946191, Validation Loss: 0.6932351991348997Epoch 52/500, Training Loss: 0.6949971230556234, Validation Loss: 0.693233345684311Epoch 53/500, Training Loss: 0.6949897901980189, Validation Loss: 0.6932307830726703Epoch 54/500, Training Loss: 0.6949807009042359, Validation Loss: 0.6932271428768005Epoch 55/500, Training Loss: 0.694969165442266, Validation Loss: 0.6932218074462437Epoch 56/500, Training Loss: 0.6949540950292759, Validation Loss: 0.6932136931663064Epoch 57/500, Training Loss: 0.6949336805981811, Validation Loss: 0.6932007921107842Epoch 58/500, Training Loss: 0.6949047214574897, Validation Loss: 0.6931791253920399Epoch 59/500, Training Loss: 0.6948610914391884, Validation Loss: 0.6931401137189599Epoch 60/500, Training Loss: 0.6947898155611238, Validation Loss: 0.6930631132603533Epoch 61/500, Training Loss: 0.6946594323511714, Validation Loss: 0.6928904915640717Epoch 62/500, Training Loss: 0.694377657288602, Validation Loss: 0.6924231764740888Epoch 63/500, Training Loss: 0.6935833378372976, Validation Loss: 0.6907058533783595Epoch 64/500, Training Loss: 0.6900080604748069, Validation Loss: 0.6799128768967962Epoch 65/500, Training Loss: 0.659117966607058, Validation Loss: 0.5548602187440591Epoch 66/500, Training Loss: 0.45713027036245996, Validation Loss: 0.231343962801518Epoch 67/500, Training Loss: 0.423429219075926, Validation Loss: 0.2864160650797577Epoch 68/500, Training Loss: 0.5156043051748316, Validation Loss: 0.3933557816069909Epoch 69/500, Training Loss: 0.6356495647143066, Validation Loss: 0.6108822302457113Epoch 70/500, Training Loss: 0.7874274646378054, Validation Loss: 0.702811424312357Epoch 71/500, Training Loss: 1.0259694740367142, Validation Loss: 1.2373238463253844Epoch 72/500, Training Loss: 2.058827701068657, Validation Loss: 1.6969245177705286Epoch 73/500, Training Loss: nan, Validation Loss: nanEpoch 74/500, Training Loss: nan, Validation Loss: nanEpoch 75/500, Training Loss: nan, Validation Loss: nanEpoch 76/500, Training Loss: nan, Validation Loss: nanEpoch 77/500, Training Loss: nan, Validation Loss: nanEpoch 78/500, Training Loss: nan, Validation Loss: nanEpoch 79/500, Training Loss: nan, Validation Loss: nanEpoch 80/500, Training Loss: nan, Validation Loss: nanEpoch 81/500, Training Loss: nan, Validation Loss: nanEpoch 82/500, Training Loss: nan, 
Validation Loss: nanEpoch 83/500, Training Loss: nan, Validation Loss: nanEpoch 84/500, Training Loss: nan, Validation Loss: nanEpoch 85/500, Training Loss: nan, Validation Loss: nanEpoch 86/500, Training Loss: nan, Validation Loss: nanEpoch 87/500, Training Loss: nan, Validation Loss: nanEpoch 88/500, Training Loss: nan, Validation Loss: nanEpoch 89/500, Training Loss: nan, Validation Loss: nanEpoch 90/500, Training Loss: nan, Validation Loss: nanEpoch 91/500, Training Loss: nan, Validation Loss: nanEpoch 92/500, Training Loss: nan, Validation Loss: nanEpoch 93/500, Training Loss: nan, Validation Loss: nanEpoch 94/500, Training Loss: nan, Validation Loss: nanEpoch 95/500, Training Loss: nan, Validation Loss: nanEpoch 96/500, Training Loss: nan, Validation Loss: nanEpoch 97/500, Training Loss: nan, Validation Loss: nanEpoch 98/500, Training Loss: nan, Validation Loss: nanEpoch 99/500, Training Loss: nan, Validation Loss: nanEpoch 100/500, Training Loss: nan, Validation Loss: nanEpoch 101/500, Training Loss: nan, Validation Loss: nanEpoch 102/500, Training Loss: nan, Validation Loss: nanEpoch 103/500, Training Loss: nan, Validation Loss: nanEpoch 104/500, Training Loss: nan, Validation Loss: nanEpoch 105/500, Training Loss: nan, Validation Loss: nanEpoch 106/500, Training Loss: nan, Validation Loss: nanEpoch 107/500, Training Loss: nan, Validation Loss: nanEpoch 108/500, Training Loss: nan, Validation Loss: nanEpoch 109/500, Training Loss: nan, Validation Loss: nanEpoch 110/500, Training Loss: nan, Validation Loss: nanEpoch 111/500, Training Loss: nan, Validation Loss: nanEpoch 112/500, Training Loss: nan, Validation Loss: nanEpoch 113/500, Training Loss: nan, Validation Loss: nanEpoch 114/500, Training Loss: nan, Validation Loss: nanEpoch 115/500, Training Loss: nan, Validation Loss: nanEpoch 116/500, Training Loss: nan, Validation Loss: nanEpoch 117/500, Training Loss: nan, Validation Loss: nanEpoch 118/500, Training Loss: nan, Validation Loss: nanEpoch 119/500, Training Loss: nan, Validation Loss: nanEpoch 120/500, Training Loss: nan, Validation Loss: nanEpoch 121/500, Training Loss: nan, Validation Loss: nanEpoch 122/500, Training Loss: nan, Validation Loss: nanEpoch 123/500, Training Loss: nan, Validation Loss: nanEpoch 124/500, Training Loss: nan, Validation Loss: nanEpoch 125/500, Training Loss: nan, Validation Loss: nanEpoch 126/500, Training Loss: nan, Validation Loss: nanEpoch 127/500, Training Loss: nan, Validation Loss: nanEpoch 128/500, Training Loss: nan, Validation Loss: nanEpoch 129/500, Training Loss: nan, Validation Loss: nanEpoch 130/500, Training Loss: nan, Validation Loss: nanEpoch 131/500, Training Loss: nan, Validation Loss: nanEpoch 132/500, Training Loss: nan, Validation Loss: nanEpoch 133/500, Training Loss: nan, Validation Loss: nanEpoch 134/500, Training Loss: nan, Validation Loss: nanEpoch 135/500, Training Loss: nan, Validation Loss: nanEpoch 136/500, Training Loss: nan, Validation Loss: nanEpoch 137/500, Training Loss: nan, Validation Loss: nanEpoch 138/500, Training Loss: nan, Validation Loss: nanEpoch 139/500, Training Loss: nan, Validation Loss: nanEpoch 140/500, Training Loss: nan, Validation Loss: nanEpoch 141/500, Training Loss: nan, Validation Loss: nanEpoch 142/500, Training Loss: nan, Validation Loss: nanEpoch 143/500, Training Loss: nan, Validation Loss: nanEpoch 144/500, Training Loss: nan, Validation Loss: nanEpoch 145/500, Training Loss: nan, Validation Loss: nanEpoch 146/500, Training Loss: nan, Validation Loss: nanEpoch 147/500, Training Loss: 
nan, Validation Loss: nanEpoch 148/500, Training Loss: nan, Validation Loss: nanEpoch 149/500, Training Loss: nan, Validation Loss: nanEpoch 150/500, Training Loss: nan, Validation Loss: nanEpoch 151/500, Training Loss: nan, Validation Loss: nanEpoch 152/500, Training Loss: nan, Validation Loss: nanEpoch 153/500, Training Loss: nan, Validation Loss: nanEpoch 154/500, Training Loss: nan, Validation Loss: nanEpoch 155/500, Training Loss: nan, Validation Loss: nanEpoch 156/500, Training Loss: nan, Validation Loss: nanEpoch 157/500, Training Loss: nan, Validation Loss: nanEpoch 158/500, Training Loss: nan, Validation Loss: nanEpoch 159/500, Training Loss: nan, Validation Loss: nanEpoch 160/500, Training Loss: nan, Validation Loss: nanEpoch 161/500, Training Loss: nan, Validation Loss: nanEpoch 162/500, Training Loss: nan, Validation Loss: nanEpoch 163/500, Training Loss: nan, Validation Loss: nanEpoch 164/500, Training Loss: nan, Validation Loss: nanEpoch 165/500, Training Loss: nan, Validation Loss: nanEpoch 166/500, Training Loss: nan, Validation Loss: nanEpoch 167/500, Training Loss: nan, Validation Loss: nanEpoch 168/500, Training Loss: nan, Validation Loss: nanEpoch 169/500, Training Loss: nan, Validation Loss: nanEpoch 170/500, Training Loss: nan, Validation Loss: nanEpoch 171/500, Training Loss: nan, Validation Loss: nanEpoch 172/500, Training Loss: nan, Validation Loss: nanEpoch 173/500, Training Loss: nan, Validation Loss: nanEpoch 174/500, Training Loss: nan, Validation Loss: nanEpoch 175/500, Training Loss: nan, Validation Loss: nanEpoch 176/500, Training Loss: nan, Validation Loss: nanEpoch 177/500, Training Loss: nan, Validation Loss: nanEpoch 178/500, Training Loss: nan, Validation Loss: nanEpoch 179/500, Training Loss: nan, Validation Loss: nanEpoch 180/500, Training Loss: nan, Validation Loss: nanEpoch 181/500, Training Loss: nan, Validation Loss: nanEpoch 182/500, Training Loss: nan, Validation Loss: nanEpoch 183/500, Training Loss: nan, Validation Loss: nanEpoch 184/500, Training Loss: nan, Validation Loss: nanEpoch 185/500, Training Loss: nan, Validation Loss: nanEpoch 186/500, Training Loss: nan, Validation Loss: nanEpoch 187/500, Training Loss: nan, Validation Loss: nanEpoch 188/500, Training Loss: nan, Validation Loss: nanEpoch 189/500, Training Loss: nan, Validation Loss: nanEpoch 190/500, Training Loss: nan, Validation Loss: nanEpoch 191/500, Training Loss: nan, Validation Loss: nanEpoch 192/500, Training Loss: nan, Validation Loss: nanEpoch 193/500, Training Loss: nan, Validation Loss: nanEpoch 194/500, Training Loss: nan, Validation Loss: nanEpoch 195/500, Training Loss: nan, Validation Loss: nanEpoch 196/500, Training Loss: nan, Validation Loss: nanEpoch 197/500, Training Loss: nan, Validation Loss: nanEpoch 198/500, Training Loss: nan, Validation Loss: nanEpoch 199/500, Training Loss: nan, Validation Loss: nanEpoch 200/500, Training Loss: nan, Validation Loss: nanEpoch 201/500, Training Loss: nan, Validation Loss: nanEpoch 202/500, Training Loss: nan, Validation Loss: nanEpoch 203/500, Training Loss: nan, Validation Loss: nanEpoch 204/500, Training Loss: nan, Validation Loss: nanEpoch 205/500, Training Loss: nan, Validation Loss: nanEpoch 206/500, Training Loss: nan, Validation Loss: nanEpoch 207/500, Training Loss: nan, Validation Loss: nanEpoch 208/500, Training Loss: nan, Validation Loss: nanEpoch 209/500, Training Loss: nan, Validation Loss: nanEpoch 210/500, Training Loss: nan, Validation Loss: nanEpoch 211/500, Training Loss: nan, Validation Loss: nanEpoch 
212/500, Training Loss: nan, Validation Loss: nanEpoch 213/500, Training Loss: nan, Validation Loss: nanEpoch 214/500, Training Loss: nan, Validation Loss: nanEpoch 215/500, Training Loss: nan, Validation Loss: nanEpoch 216/500, Training Loss: nan, Validation Loss: nanEpoch 217/500, Training Loss: nan, Validation Loss: nanEpoch 218/500, Training Loss: nan, Validation Loss: nanEpoch 219/500, Training Loss: nan, Validation Loss: nanEpoch 220/500, Training Loss: nan, Validation Loss: nanEpoch 221/500, Training Loss: nan, Validation Loss: nanEpoch 222/500, Training Loss: nan, Validation Loss: nanEpoch 223/500, Training Loss: nan, Validation Loss: nanEpoch 224/500, Training Loss: nan, Validation Loss: nanEpoch 225/500, Training Loss: nan, Validation Loss: nanEpoch 226/500, Training Loss: nan, Validation Loss: nanEpoch 227/500, Training Loss: nan, Validation Loss: nanEpoch 228/500, Training Loss: nan, Validation Loss: nanEpoch 229/500, Training Loss: nan, Validation Loss: nanEpoch 230/500, Training Loss: nan, Validation Loss: nanEpoch 231/500, Training Loss: nan, Validation Loss: nanEpoch 232/500, Training Loss: nan, Validation Loss: nanEpoch 233/500, Training Loss: nan, Validation Loss: nanEpoch 234/500, Training Loss: nan, Validation Loss: nanEpoch 235/500, Training Loss: nan, Validation Loss: nanEpoch 236/500, Training Loss: nan, Validation Loss: nanEpoch 237/500, Training Loss: nan, Validation Loss: nanEpoch 238/500, Training Loss: nan, Validation Loss: nanEpoch 239/500, Training Loss: nan, Validation Loss: nanEpoch 240/500, Training Loss: nan, Validation Loss: nanEpoch 241/500, Training Loss: nan, Validation Loss: nanEpoch 242/500, Training Loss: nan, Validation Loss: nanEpoch 243/500, Training Loss: nan, Validation Loss: nanEpoch 244/500, Training Loss: nan, Validation Loss: nanEpoch 245/500, Training Loss: nan, Validation Loss: nanEpoch 246/500, Training Loss: nan, Validation Loss: nanEpoch 247/500, Training Loss: nan, Validation Loss: nanEpoch 248/500, Training Loss: nan, Validation Loss: nanEpoch 249/500, Training Loss: nan, Validation Loss: nanEpoch 250/500, Training Loss: nan, Validation Loss: nanEpoch 251/500, Training Loss: nan, Validation Loss: nanEpoch 252/500, Training Loss: nan, Validation Loss: nanEpoch 253/500, Training Loss: nan, Validation Loss: nanEpoch 254/500, Training Loss: nan, Validation Loss: nanEpoch 255/500, Training Loss: nan, Validation Loss: nanEpoch 256/500, Training Loss: nan, Validation Loss: nanEpoch 257/500, Training Loss: nan, Validation Loss: nanEpoch 258/500, Training Loss: nan, Validation Loss: nanEpoch 259/500, Training Loss: nan, Validation Loss: nanEpoch 260/500, Training Loss: nan, Validation Loss: nanEpoch 261/500, Training Loss: nan, Validation Loss: nanEpoch 262/500, Training Loss: nan, Validation Loss: nanEpoch 263/500, Training Loss: nan, Validation Loss: nanEpoch 264/500, Training Loss: nan, Validation Loss: nanEpoch 265/500, Training Loss: nan, Validation Loss: nanEpoch 266/500, Training Loss: nan, Validation Loss: nanEpoch 267/500, Training Loss: nan, Validation Loss: nanEpoch 268/500, Training Loss: nan, Validation Loss: nanEpoch 269/500, Training Loss: nan, Validation Loss: nanEpoch 270/500, Training Loss: nan, Validation Loss: nanEpoch 271/500, Training Loss: nan, Validation Loss: nanEpoch 272/500, Training Loss: nan, Validation Loss: nanEpoch 273/500, Training Loss: nan, Validation Loss: nanEpoch 274/500, Training Loss: nan, Validation Loss: nanEpoch 275/500, Training Loss: nan, Validation Loss: nanEpoch 276/500, Training Loss: nan, 
Validation Loss: nanEpoch 277/500, Training Loss: nan, Validation Loss: nanEpoch 278/500, Training Loss: nan, Validation Loss: nanEpoch 279/500, Training Loss: nan, Validation Loss: nanEpoch 280/500, Training Loss: nan, Validation Loss: nanEpoch 281/500, Training Loss: nan, Validation Loss: nanEpoch 282/500, Training Loss: nan, Validation Loss: nanEpoch 283/500, Training Loss: nan, Validation Loss: nanEpoch 284/500, Training Loss: nan, Validation Loss: nanEpoch 285/500, Training Loss: nan, Validation Loss: nanEpoch 286/500, Training Loss: nan, Validation Loss: nanEpoch 287/500, Training Loss: nan, Validation Loss: nanEpoch 288/500, Training Loss: nan, Validation Loss: nanEpoch 289/500, Training Loss: nan, Validation Loss: nanEpoch 290/500, Training Loss: nan, Validation Loss: nanEpoch 291/500, Training Loss: nan, Validation Loss: nanEpoch 292/500, Training Loss: nan, Validation Loss: nanEpoch 293/500, Training Loss: nan, Validation Loss: nanEpoch 294/500, Training Loss: nan, Validation Loss: nanEpoch 295/500, Training Loss: nan, Validation Loss: nanEpoch 296/500, Training Loss: nan, Validation Loss: nanEpoch 297/500, Training Loss: nan, Validation Loss: nanEpoch 298/500, Training Loss: nan, Validation Loss: nanEpoch 299/500, Training Loss: nan, Validation Loss: nanEpoch 300/500, Training Loss: nan, Validation Loss: nanEpoch 301/500, Training Loss: nan, Validation Loss: nanEpoch 302/500, Training Loss: nan, Validation Loss: nanEpoch 303/500, Training Loss: nan, Validation Loss: nanEpoch 304/500, Training Loss: nan, Validation Loss: nanEpoch 305/500, Training Loss: nan, Validation Loss: nanEpoch 306/500, Training Loss: nan, Validation Loss: nanEpoch 307/500, Training Loss: nan, Validation Loss: nanEpoch 308/500, Training Loss: nan, Validation Loss: nanEpoch 309/500, Training Loss: nan, Validation Loss: nanEpoch 310/500, Training Loss: nan, Validation Loss: nanEpoch 311/500, Training Loss: nan, Validation Loss: nanEpoch 312/500, Training Loss: nan, Validation Loss: nanEpoch 313/500, Training Loss: nan, Validation Loss: nanEpoch 314/500, Training Loss: nan, Validation Loss: nanEpoch 315/500, Training Loss: nan, Validation Loss: nanEpoch 316/500, Training Loss: nan, Validation Loss: nanEpoch 317/500, Training Loss: nan, Validation Loss: nanEpoch 318/500, Training Loss: nan, Validation Loss: nanEpoch 319/500, Training Loss: nan, Validation Loss: nanEpoch 320/500, Training Loss: nan, Validation Loss: nanEpoch 321/500, Training Loss: nan, Validation Loss: nanEpoch 322/500, Training Loss: nan, Validation Loss: nanEpoch 323/500, Training Loss: nan, Validation Loss: nanEpoch 324/500, Training Loss: nan, Validation Loss: nanEpoch 325/500, Training Loss: nan, Validation Loss: nanEpoch 326/500, Training Loss: nan, Validation Loss: nanEpoch 327/500, Training Loss: nan, Validation Loss: nanEpoch 328/500, Training Loss: nan, Validation Loss: nanEpoch 329/500, Training Loss: nan, Validation Loss: nanEpoch 330/500, Training Loss: nan, Validation Loss: nanEpoch 331/500, Training Loss: nan, Validation Loss: nanEpoch 332/500, Training Loss: nan, Validation Loss: nanEpoch 333/500, Training Loss: nan, Validation Loss: nanEpoch 334/500, Training Loss: nan, Validation Loss: nanEpoch 335/500, Training Loss: nan, Validation Loss: nanEpoch 336/500, Training Loss: nan, Validation Loss: nanEpoch 337/500, Training Loss: nan, Validation Loss: nanEpoch 338/500, Training Loss: nan, Validation Loss: nanEpoch 339/500, Training Loss: nan, Validation Loss: nanEpoch 340/500, Training Loss: nan, Validation Loss: nanEpoch 341/500, 
Training Loss: nan, Validation Loss: nanEpoch 342/500, Training Loss: nan, Validation Loss: nanEpoch 343/500, Training Loss: nan, Validation Loss: nanEpoch 344/500, Training Loss: nan, Validation Loss: nanEpoch 345/500, Training Loss: nan, Validation Loss: nanEpoch 346/500, Training Loss: nan, Validation Loss: nanEpoch 347/500, Training Loss: nan, Validation Loss: nanEpoch 348/500, Training Loss: nan, Validation Loss: nanEpoch 349/500, Training Loss: nan, Validation Loss: nanEpoch 350/500, Training Loss: nan, Validation Loss: nanEpoch 351/500, Training Loss: nan, Validation Loss: nanEpoch 352/500, Training Loss: nan, Validation Loss: nanEpoch 353/500, Training Loss: nan, Validation Loss: nanEpoch 354/500, Training Loss: nan, Validation Loss: nanEpoch 355/500, Training Loss: nan, Validation Loss: nanEpoch 356/500, Training Loss: nan, Validation Loss: nanEpoch 357/500, Training Loss: nan, Validation Loss: nanEpoch 358/500, Training Loss: nan, Validation Loss: nanEpoch 359/500, Training Loss: nan, Validation Loss: nanEpoch 360/500, Training Loss: nan, Validation Loss: nanEpoch 361/500, Training Loss: nan, Validation Loss: nanEpoch 362/500, Training Loss: nan, Validation Loss: nanEpoch 363/500, Training Loss: nan, Validation Loss: nanEpoch 364/500, Training Loss: nan, Validation Loss: nanEpoch 365/500, Training Loss: nan, Validation Loss: nanEpoch 366/500, Training Loss: nan, Validation Loss: nanEpoch 367/500, Training Loss: nan, Validation Loss: nanEpoch 368/500, Training Loss: nan, Validation Loss: nanEpoch 369/500, Training Loss: nan, Validation Loss: nanEpoch 370/500, Training Loss: nan, Validation Loss: nanEpoch 371/500, Training Loss: nan, Validation Loss: nanEpoch 372/500, Training Loss: nan, Validation Loss: nanEpoch 373/500, Training Loss: nan, Validation Loss: nanEpoch 374/500, Training Loss: nan, Validation Loss: nanEpoch 375/500, Training Loss: nan, Validation Loss: nanEpoch 376/500, Training Loss: nan, Validation Loss: nanEpoch 377/500, Training Loss: nan, Validation Loss: nanEpoch 378/500, Training Loss: nan, Validation Loss: nanEpoch 379/500, Training Loss: nan, Validation Loss: nanEpoch 380/500, Training Loss: nan, Validation Loss: nanEpoch 381/500, Training Loss: nan, Validation Loss: nanEpoch 382/500, Training Loss: nan, Validation Loss: nanEpoch 383/500, Training Loss: nan, Validation Loss: nanEpoch 384/500, Training Loss: nan, Validation Loss: nanEpoch 385/500, Training Loss: nan, Validation Loss: nanEpoch 386/500, Training Loss: nan, Validation Loss: nanEpoch 387/500, Training Loss: nan, Validation Loss: nanEpoch 388/500, Training Loss: nan, Validation Loss: nanEpoch 389/500, Training Loss: nan, Validation Loss: nanEpoch 390/500, Training Loss: nan, Validation Loss: nanEpoch 391/500, Training Loss: nan, Validation Loss: nanEpoch 392/500, Training Loss: nan, Validation Loss: nanEpoch 393/500, Training Loss: nan, Validation Loss: nanEpoch 394/500, Training Loss: nan, Validation Loss: nanEpoch 395/500, Training Loss: nan, Validation Loss: nanEpoch 396/500, Training Loss: nan, Validation Loss: nanEpoch 397/500, Training Loss: nan, Validation Loss: nanEpoch 398/500, Training Loss: nan, Validation Loss: nanEpoch 399/500, Training Loss: nan, Validation Loss: nanEpoch 400/500, Training Loss: nan, Validation Loss: nanEpoch 401/500, Training Loss: nan, Validation Loss: nanEpoch 402/500, Training Loss: nan, Validation Loss: nanEpoch 403/500, Training Loss: nan, Validation Loss: nanEpoch 404/500, Training Loss: nan, Validation Loss: nanEpoch 405/500, Training Loss: nan, Validation 
Loss: nanEpoch 406/500, Training Loss: nan, Validation Loss: nanEpoch 407/500, Training Loss: nan, Validation Loss: nanEpoch 408/500, Training Loss: nan, Validation Loss: nanEpoch 409/500, Training Loss: nan, Validation Loss: nanEpoch 410/500, Training Loss: nan, Validation Loss: nanEpoch 411/500, Training Loss: nan, Validation Loss: nanEpoch 412/500, Training Loss: nan, Validation Loss: nanEpoch 413/500, Training Loss: nan, Validation Loss: nanEpoch 414/500, Training Loss: nan, Validation Loss: nanEpoch 415/500, Training Loss: nan, Validation Loss: nanEpoch 416/500, Training Loss: nan, Validation Loss: nanEpoch 417/500, Training Loss: nan, Validation Loss: nanEpoch 418/500, Training Loss: nan, Validation Loss: nanEpoch 419/500, Training Loss: nan, Validation Loss: nanEpoch 420/500, Training Loss: nan, Validation Loss: nanEpoch 421/500, Training Loss: nan, Validation Loss: nanEpoch 422/500, Training Loss: nan, Validation Loss: nanEpoch 423/500, Training Loss: nan, Validation Loss: nanEpoch 424/500, Training Loss: nan, Validation Loss: nanEpoch 425/500, Training Loss: nan, Validation Loss: nanEpoch 426/500, Training Loss: nan, Validation Loss: nanEpoch 427/500, Training Loss: nan, Validation Loss: nanEpoch 428/500, Training Loss: nan, Validation Loss: nanEpoch 429/500, Training Loss: nan, Validation Loss: nanEpoch 430/500, Training Loss: nan, Validation Loss: nanEpoch 431/500, Training Loss: nan, Validation Loss: nanEpoch 432/500, Training Loss: nan, Validation Loss: nanEpoch 433/500, Training Loss: nan, Validation Loss: nanEpoch 434/500, Training Loss: nan, Validation Loss: nanEpoch 435/500, Training Loss: nan, Validation Loss: nanEpoch 436/500, Training Loss: nan, Validation Loss: nanEpoch 437/500, Training Loss: nan, Validation Loss: nanEpoch 438/500, Training Loss: nan, Validation Loss: nanEpoch 439/500, Training Loss: nan, Validation Loss: nanEpoch 440/500, Training Loss: nan, Validation Loss: nanEpoch 441/500, Training Loss: nan, Validation Loss: nanEpoch 442/500, Training Loss: nan, Validation Loss: nanEpoch 443/500, Training Loss: nan, Validation Loss: nanEpoch 444/500, Training Loss: nan, Validation Loss: nanEpoch 445/500, Training Loss: nan, Validation Loss: nanEpoch 446/500, Training Loss: nan, Validation Loss: nanEpoch 447/500, Training Loss: nan, Validation Loss: nanEpoch 448/500, Training Loss: nan, Validation Loss: nanEpoch 449/500, Training Loss: nan, Validation Loss: nanEpoch 450/500, Training Loss: nan, Validation Loss: nanEpoch 451/500, Training Loss: nan, Validation Loss: nanEpoch 452/500, Training Loss: nan, Validation Loss: nanEpoch 453/500, Training Loss: nan, Validation Loss: nanEpoch 454/500, Training Loss: nan, Validation Loss: nanEpoch 455/500, Training Loss: nan, Validation Loss: nanEpoch 456/500, Training Loss: nan, Validation Loss: nanEpoch 457/500, Training Loss: nan, Validation Loss: nanEpoch 458/500, Training Loss: nan, Validation Loss: nanEpoch 459/500, Training Loss: nan, Validation Loss: nanEpoch 460/500, Training Loss: nan, Validation Loss: nanEpoch 461/500, Training Loss: nan, Validation Loss: nanEpoch 462/500, Training Loss: nan, Validation Loss: nanEpoch 463/500, Training Loss: nan, Validation Loss: nanEpoch 464/500, Training Loss: nan, Validation Loss: nanEpoch 465/500, Training Loss: nan, Validation Loss: nanEpoch 466/500, Training Loss: nan, Validation Loss: nanEpoch 467/500, Training Loss: nan, Validation Loss: nanEpoch 468/500, Training Loss: nan, Validation Loss: nanEpoch 469/500, Training Loss: nan, Validation Loss: nanEpoch 470/500, Training 
Loss: nan, Validation Loss: nan
[Per-epoch output condensed for readability; repeated entries are summarized and loss values are rounded to four decimals.]
Epoch 471/500 through Epoch 500/500: Training Loss: nan, Validation Loss: nan

Epoch 1/800, Training Loss: 0.6949, Validation Loss: 0.6931
Epochs 2-59/800: both losses plateau near 0.6951 / 0.6932
Epoch 60/800, Training Loss: 0.6889, Validation Loss: 0.6757
Epoch 61/800, Training Loss: 0.6163, Validation Loss: 0.4021
Epoch 62/800, Training Loss: 0.4386, Validation Loss: 0.2625
Epochs 63-69/800: losses climb back up (Epoch 69: 2.9457 / 2.0312)
Epoch 70/800 through Epoch 800/800: Training Loss: nan, Validation Loss: nan

Epoch 1/100, Training Loss: 0.6958, Validation Loss: 0.6933
Epochs 2-54/100: both losses plateau near 0.6960 / 0.6934
Epoch 55/100, Training Loss: 0.6283, Validation Loss: 0.3785
Epoch 56/100, Training Loss: 0.4156, Validation Loss: 0.2643
Epochs 57-63/100: losses climb back up (Epoch 63: 1.0458 / 0.8162)
Epoch 64/100 through Epoch 100/100: Training Loss: nan, Validation Loss: nan

Epoch 1/300, Training Loss: 0.6958, Validation Loss: 0.6933
Epochs 2-54/300: both losses plateau near 0.6960 / 0.6934
Epoch 55/300, Training Loss: 0.5500, Validation Loss: 0.2972
Epoch 56/300, Training Loss: 0.3675, Validation Loss: 0.2573
Epochs 57-67/300: losses climb back up (Epoch 67: 0.9342 / 0.4836)
Epoch 68/300 onward: Training Loss: nan, Validation Loss: nan [remaining raw log truncated]
Training Loss: nan, Validation Loss: nanEpoch 260/300, Training Loss: nan, Validation Loss: nanEpoch 261/300, Training Loss: nan, Validation Loss: nanEpoch 262/300, Training Loss: nan, Validation Loss: nanEpoch 263/300, Training Loss: nan, Validation Loss: nanEpoch 264/300, Training Loss: nan, Validation Loss: nanEpoch 265/300, Training Loss: nan, Validation Loss: nanEpoch 266/300, Training Loss: nan, Validation Loss: nanEpoch 267/300, Training Loss: nan, Validation Loss: nanEpoch 268/300, Training Loss: nan, Validation Loss: nanEpoch 269/300, Training Loss: nan, Validation Loss: nanEpoch 270/300, Training Loss: nan, Validation Loss: nanEpoch 271/300, Training Loss: nan, Validation Loss: nanEpoch 272/300, Training Loss: nan, Validation Loss: nanEpoch 273/300, Training Loss: nan, Validation Loss: nanEpoch 274/300, Training Loss: nan, Validation Loss: nanEpoch 275/300, Training Loss: nan, Validation Loss: nanEpoch 276/300, Training Loss: nan, Validation Loss: nanEpoch 277/300, Training Loss: nan, Validation Loss: nanEpoch 278/300, Training Loss: nan, Validation Loss: nanEpoch 279/300, Training Loss: nan, Validation Loss: nanEpoch 280/300, Training Loss: nan, Validation Loss: nanEpoch 281/300, Training Loss: nan, Validation Loss: nanEpoch 282/300, Training Loss: nan, Validation Loss: nanEpoch 283/300, Training Loss: nan, Validation Loss: nanEpoch 284/300, Training Loss: nan, Validation Loss: nanEpoch 285/300, Training Loss: nan, Validation Loss: nanEpoch 286/300, Training Loss: nan, Validation Loss: nanEpoch 287/300, Training Loss: nan, Validation Loss: nanEpoch 288/300, Training Loss: nan, Validation Loss: nanEpoch 289/300, Training Loss: nan, Validation Loss: nanEpoch 290/300, Training Loss: nan, Validation Loss: nanEpoch 291/300, Training Loss: nan, Validation Loss: nanEpoch 292/300, Training Loss: nan, Validation Loss: nanEpoch 293/300, Training Loss: nan, Validation Loss: nanEpoch 294/300, Training Loss: nan, Validation Loss: nanEpoch 295/300, Training Loss: nan, Validation Loss: nanEpoch 296/300, Training Loss: nan, Validation Loss: nanEpoch 297/300, Training Loss: nan, Validation Loss: nanEpoch 298/300, Training Loss: nan, Validation Loss: nanEpoch 299/300, Training Loss: nan, Validation Loss: nanEpoch 300/300, Training Loss: nan, Validation Loss: nanEpoch 1/500, Training Loss: 0.6957582396823037, Validation Loss: 0.693268706311425Epoch 2/500, Training Loss: 0.6959814221605001, Validation Loss: 0.6933466620916825Epoch 3/500, Training Loss: 0.6960066062526171, Validation Loss: 0.693351509903776Epoch 4/500, Training Loss: 0.6960102860431512, Validation Loss: 0.6933522302501108Epoch 5/500, Training Loss: 0.6960125851480351, Validation Loss: 0.6933526928963553Epoch 6/500, Training Loss: 0.6960146287443728, Validation Loss: 0.6933531079207441Epoch 7/500, Training Loss: 0.6960164938159878, Validation Loss: 0.6933534893852088Epoch 8/500, Training Loss: 0.696018199592585, Validation Loss: 0.693353840627541Epoch 9/500, Training Loss: 0.6960197606756154, Validation Loss: 0.6933541641569444Epoch 10/500, Training Loss: 0.6960211900583526, Validation Loss: 0.6933544622119101Epoch 11/500, Training Loss: 0.6960224994514369, Validation Loss: 0.6933547368173012Epoch 12/500, Training Loss: 0.6960236994408872, Validation Loss: 0.6933549898104918Epoch 13/500, Training Loss: 0.6960247996169431, Validation Loss: 0.6933552228622324Epoch 14/500, Training Loss: 0.6960258086845145, Validation Loss: 0.6933554374942287Epoch 15/500, Training Loss: 0.6960267345586331, Validation Loss: 0.6933556350940469Epoch 
16/500, Training Loss: 0.6960275844473603, Validation Loss: 0.6933558169277866Epoch 17/500, Training Loss: 0.6960283649241458, Validation Loss: 0.6933559841508758Epoch 18/500, Training Loss: 0.6960290819912551, Validation Loss: 0.6933561378172592Epoch 19/500, Training Loss: 0.6960297411356191, Validation Loss: 0.6933562788871938Epoch 20/500, Training Loss: 0.6960303473781978, Validation Loss: 0.6933564082338227Epoch 21/500, Training Loss: 0.6960309053177604, Validation Loss: 0.6933565266486469Epoch 22/500, Training Loss: 0.696031419169855, Validation Loss: 0.6933566348459678Epoch 23/500, Training Loss: 0.6960318928015781, Validation Loss: 0.6933567334663564Epoch 24/500, Training Loss: 0.696032329762653, Validation Loss: 0.693356823079149Epoch 25/500, Training Loss: 0.6960327333132559, Validation Loss: 0.6933569041839438Epoch 26/500, Training Loss: 0.6960331064489226, Validation Loss: 0.693356977211024Epoch 27/500, Training Loss: 0.6960334519228111, Validation Loss: 0.6933570425205906Epoch 28/500, Training Loss: 0.6960337722655221, Validation Loss: 0.6933571004006264Epoch 29/500, Training Loss: 0.6960340698026243, Validation Loss: 0.6933571510631433Epoch 30/500, Training Loss: 0.696034346669939, Validation Loss: 0.6933571946384596Epoch 31/500, Training Loss: 0.6960346048265899, Validation Loss: 0.6933572311670442Epoch 32/500, Training Loss: 0.6960348460657056, Validation Loss: 0.6933572605882847Epoch 33/500, Training Loss: 0.6960350720225553, Validation Loss: 0.6933572827253093Epoch 34/500, Training Loss: 0.6960352841797196, Validation Loss: 0.6933572972646735Epoch 35/500, Training Loss: 0.6960354838687379, Validation Loss: 0.6933573037292875Epoch 36/500, Training Loss: 0.6960356722672725, Validation Loss: 0.6933573014423122Epoch 37/500, Training Loss: 0.6960358503904962, Validation Loss: 0.6933572894788721Epoch 38/500, Training Loss: 0.6960360190747041, Validation Loss: 0.6933572666011065Epoch 39/500, Training Loss: 0.696036178950232, Validation Loss: 0.6933572311701652Epoch 40/500, Training Loss: 0.6960363303993484, Validation Loss: 0.6933571810258854Epoch 41/500, Training Loss: 0.6960364734926083, Validation Loss: 0.6933571133205298Epoch 42/500, Training Loss: 0.6960366078938013, Validation Loss: 0.6933570242862788Epoch 43/500, Training Loss: 0.6960367327183337, Validation Loss: 0.6933569089056705Epoch 44/500, Training Loss: 0.6960368463213417, Validation Loss: 0.693356760437434Epoch 45/500, Training Loss: 0.6960369459779052, Validation Loss: 0.6933565697228097Epoch 46/500, Training Loss: 0.696037027394343, Validation Loss: 0.6933563241518277Epoch 47/500, Training Loss: 0.696037083949572, Validation Loss: 0.6933560060908912Epoch 48/500, Training Loss: 0.6960371054950901, Validation Loss: 0.6933555904355057Epoch 49/500, Training Loss: 0.6960370764146158, Validation Loss: 0.693355040702264Epoch 50/500, Training Loss: 0.6960369724058866, Validation Loss: 0.6933543026046511Epoch 51/500, Training Loss: 0.696036754984058, Validation Loss: 0.6933532931388957Epoch 52/500, Training Loss: 0.696036361769175, Validation Loss: 0.6933518813281248Epoch 53/500, Training Loss: 0.6960356886310486, Validation Loss: 0.6933498527302012Epoch 54/500, Training Loss: 0.696034555301324, Validation Loss: 0.6933468405767396Epoch 55/500, Training Loss: 0.6960326353722179, Validation Loss: 0.6933421837672875Epoch 56/500, Training Loss: 0.696029303930149, Validation Loss: 0.6933346115743504Epoch 57/500, Training Loss: 0.6960232773541668, Validation Loss: 0.6933214765637954Epoch 58/500, Training Loss: 
0.6960116679041566, Validation Loss: 0.6932966578213826Epoch 59/500, Training Loss: 0.6959871375272622, Validation Loss: 0.6932438740827553Epoch 60/500, Training Loss: 0.6959275450585597, Validation Loss: 0.6931102288272545Epoch 61/500, Training Loss: 0.6957460190306927, Validation Loss: 0.6926609000546422Epoch 62/500, Training Loss: 0.6949004586022249, Validation Loss: 0.6900509514884301Epoch 63/500, Training Loss: 0.6830659395424604, Validation Loss: 0.6263302707034535Epoch 64/500, Training Loss: 0.48369306530963074, Validation Loss: 0.28500995938821844Epoch 65/500, Training Loss: 0.373507823698612, Validation Loss: 0.2735177546415984Epoch 66/500, Training Loss: 0.379705169917959, Validation Loss: 0.29227462092162215Epoch 67/500, Training Loss: 0.400835938196422, Validation Loss: 0.3070891412527916Epoch 68/500, Training Loss: 0.4209757719138964, Validation Loss: 0.3142494120973656Epoch 69/500, Training Loss: 0.43753150555380554, Validation Loss: 0.3164585477491749Epoch 70/500, Training Loss: 0.44916990235913384, Validation Loss: 0.3166104076124593Epoch 71/500, Training Loss: 0.4660300367630308, Validation Loss: 0.31319330593642414Epoch 72/500, Training Loss: 0.4953997327143839, Validation Loss: 0.3079381024731688Epoch 73/500, Training Loss: 0.4967662454567872, Validation Loss: 0.3134662915308811Epoch 74/500, Training Loss: 0.5696041744583027, Validation Loss: 0.33886665571818925Epoch 75/500, Training Loss: 0.6721349979642106, Validation Loss: 0.35289525764046453Epoch 76/500, Training Loss: 0.9280997545870069, Validation Loss: 0.49076083977704715Epoch 77/500, Training Loss: nan, Validation Loss: nanEpoch 78/500, Training Loss: nan, Validation Loss: 0.8189215992907943Epoch 79/500, Training Loss: nan, Validation Loss: nanEpoch 80/500, Training Loss: nan, Validation Loss: nanEpoch 81/500, Training Loss: nan, Validation Loss: nanEpoch 82/500, Training Loss: nan, Validation Loss: nanEpoch 83/500, Training Loss: nan, Validation Loss: nanEpoch 84/500, Training Loss: nan, Validation Loss: nanEpoch 85/500, Training Loss: nan, Validation Loss: nanEpoch 86/500, Training Loss: nan, Validation Loss: nanEpoch 87/500, Training Loss: nan, Validation Loss: nanEpoch 88/500, Training Loss: nan, Validation Loss: nanEpoch 89/500, Training Loss: nan, Validation Loss: nanEpoch 90/500, Training Loss: nan, Validation Loss: nanEpoch 91/500, Training Loss: nan, Validation Loss: nanEpoch 92/500, Training Loss: nan, Validation Loss: nanEpoch 93/500, Training Loss: nan, Validation Loss: nanEpoch 94/500, Training Loss: nan, Validation Loss: nanEpoch 95/500, Training Loss: nan, Validation Loss: nanEpoch 96/500, Training Loss: nan, Validation Loss: nanEpoch 97/500, Training Loss: nan, Validation Loss: nanEpoch 98/500, Training Loss: nan, Validation Loss: nanEpoch 99/500, Training Loss: nan, Validation Loss: nanEpoch 100/500, Training Loss: nan, Validation Loss: nanEpoch 101/500, Training Loss: nan, Validation Loss: nanEpoch 102/500, Training Loss: nan, Validation Loss: nanEpoch 103/500, Training Loss: nan, Validation Loss: nanEpoch 104/500, Training Loss: nan, Validation Loss: nanEpoch 105/500, Training Loss: nan, Validation Loss: nanEpoch 106/500, Training Loss: nan, Validation Loss: nanEpoch 107/500, Training Loss: nan, Validation Loss: nanEpoch 108/500, Training Loss: nan, Validation Loss: nanEpoch 109/500, Training Loss: nan, Validation Loss: nanEpoch 110/500, Training Loss: nan, Validation Loss: nanEpoch 111/500, Training Loss: nan, Validation Loss: nanEpoch 112/500, Training Loss: nan, Validation Loss: nanEpoch 
113/500, Training Loss: nan, Validation Loss: nanEpoch 114/500, Training Loss: nan, Validation Loss: nanEpoch 115/500, Training Loss: nan, Validation Loss: nanEpoch 116/500, Training Loss: nan, Validation Loss: nanEpoch 117/500, Training Loss: nan, Validation Loss: nanEpoch 118/500, Training Loss: nan, Validation Loss: nanEpoch 119/500, Training Loss: nan, Validation Loss: nanEpoch 120/500, Training Loss: nan, Validation Loss: nanEpoch 121/500, Training Loss: nan, Validation Loss: nanEpoch 122/500, Training Loss: nan, Validation Loss: nanEpoch 123/500, Training Loss: nan, Validation Loss: nanEpoch 124/500, Training Loss: nan, Validation Loss: nanEpoch 125/500, Training Loss: nan, Validation Loss: nanEpoch 126/500, Training Loss: nan, Validation Loss: nanEpoch 127/500, Training Loss: nan, Validation Loss: nanEpoch 128/500, Training Loss: nan, Validation Loss: nanEpoch 129/500, Training Loss: nan, Validation Loss: nanEpoch 130/500, Training Loss: nan, Validation Loss: nanEpoch 131/500, Training Loss: nan, Validation Loss: nanEpoch 132/500, Training Loss: nan, Validation Loss: nanEpoch 133/500, Training Loss: nan, Validation Loss: nanEpoch 134/500, Training Loss: nan, Validation Loss: nanEpoch 135/500, Training Loss: nan, Validation Loss: nanEpoch 136/500, Training Loss: nan, Validation Loss: nanEpoch 137/500, Training Loss: nan, Validation Loss: nanEpoch 138/500, Training Loss: nan, Validation Loss: nanEpoch 139/500, Training Loss: nan, Validation Loss: nanEpoch 140/500, Training Loss: nan, Validation Loss: nanEpoch 141/500, Training Loss: nan, Validation Loss: nanEpoch 142/500, Training Loss: nan, Validation Loss: nanEpoch 143/500, Training Loss: nan, Validation Loss: nanEpoch 144/500, Training Loss: nan, Validation Loss: nanEpoch 145/500, Training Loss: nan, Validation Loss: nanEpoch 146/500, Training Loss: nan, Validation Loss: nanEpoch 147/500, Training Loss: nan, Validation Loss: nanEpoch 148/500, Training Loss: nan, Validation Loss: nanEpoch 149/500, Training Loss: nan, Validation Loss: nanEpoch 150/500, Training Loss: nan, Validation Loss: nanEpoch 151/500, Training Loss: nan, Validation Loss: nanEpoch 152/500, Training Loss: nan, Validation Loss: nanEpoch 153/500, Training Loss: nan, Validation Loss: nanEpoch 154/500, Training Loss: nan, Validation Loss: nanEpoch 155/500, Training Loss: nan, Validation Loss: nanEpoch 156/500, Training Loss: nan, Validation Loss: nanEpoch 157/500, Training Loss: nan, Validation Loss: nanEpoch 158/500, Training Loss: nan, Validation Loss: nanEpoch 159/500, Training Loss: nan, Validation Loss: nanEpoch 160/500, Training Loss: nan, Validation Loss: nanEpoch 161/500, Training Loss: nan, Validation Loss: nanEpoch 162/500, Training Loss: nan, Validation Loss: nanEpoch 163/500, Training Loss: nan, Validation Loss: nanEpoch 164/500, Training Loss: nan, Validation Loss: nanEpoch 165/500, Training Loss: nan, Validation Loss: nanEpoch 166/500, Training Loss: nan, Validation Loss: nanEpoch 167/500, Training Loss: nan, Validation Loss: nanEpoch 168/500, Training Loss: nan, Validation Loss: nanEpoch 169/500, Training Loss: nan, Validation Loss: nanEpoch 170/500, Training Loss: nan, Validation Loss: nanEpoch 171/500, Training Loss: nan, Validation Loss: nanEpoch 172/500, Training Loss: nan, Validation Loss: nanEpoch 173/500, Training Loss: nan, Validation Loss: nanEpoch 174/500, Training Loss: nan, Validation Loss: nanEpoch 175/500, Training Loss: nan, Validation Loss: nanEpoch 176/500, Training Loss: nan, Validation Loss: nanEpoch 177/500, Training Loss: nan, 
Validation Loss: nanEpoch 178/500, Training Loss: nan, Validation Loss: nanEpoch 179/500, Training Loss: nan, Validation Loss: nanEpoch 180/500, Training Loss: nan, Validation Loss: nanEpoch 181/500, Training Loss: nan, Validation Loss: nanEpoch 182/500, Training Loss: nan, Validation Loss: nanEpoch 183/500, Training Loss: nan, Validation Loss: nanEpoch 184/500, Training Loss: nan, Validation Loss: nanEpoch 185/500, Training Loss: nan, Validation Loss: nanEpoch 186/500, Training Loss: nan, Validation Loss: nanEpoch 187/500, Training Loss: nan, Validation Loss: nanEpoch 188/500, Training Loss: nan, Validation Loss: nanEpoch 189/500, Training Loss: nan, Validation Loss: nanEpoch 190/500, Training Loss: nan, Validation Loss: nanEpoch 191/500, Training Loss: nan, Validation Loss: nanEpoch 192/500, Training Loss: nan, Validation Loss: nanEpoch 193/500, Training Loss: nan, Validation Loss: nanEpoch 194/500, Training Loss: nan, Validation Loss: nanEpoch 195/500, Training Loss: nan, Validation Loss: nanEpoch 196/500, Training Loss: nan, Validation Loss: nanEpoch 197/500, Training Loss: nan, Validation Loss: nanEpoch 198/500, Training Loss: nan, Validation Loss: nanEpoch 199/500, Training Loss: nan, Validation Loss: nanEpoch 200/500, Training Loss: nan, Validation Loss: nanEpoch 201/500, Training Loss: nan, Validation Loss: nanEpoch 202/500, Training Loss: nan, Validation Loss: nanEpoch 203/500, Training Loss: nan, Validation Loss: nanEpoch 204/500, Training Loss: nan, Validation Loss: nanEpoch 205/500, Training Loss: nan, Validation Loss: nanEpoch 206/500, Training Loss: nan, Validation Loss: nanEpoch 207/500, Training Loss: nan, Validation Loss: nanEpoch 208/500, Training Loss: nan, Validation Loss: nanEpoch 209/500, Training Loss: nan, Validation Loss: nanEpoch 210/500, Training Loss: nan, Validation Loss: nanEpoch 211/500, Training Loss: nan, Validation Loss: nanEpoch 212/500, Training Loss: nan, Validation Loss: nanEpoch 213/500, Training Loss: nan, Validation Loss: nanEpoch 214/500, Training Loss: nan, Validation Loss: nanEpoch 215/500, Training Loss: nan, Validation Loss: nanEpoch 216/500, Training Loss: nan, Validation Loss: nanEpoch 217/500, Training Loss: nan, Validation Loss: nanEpoch 218/500, Training Loss: nan, Validation Loss: nanEpoch 219/500, Training Loss: nan, Validation Loss: nanEpoch 220/500, Training Loss: nan, Validation Loss: nanEpoch 221/500, Training Loss: nan, Validation Loss: nanEpoch 222/500, Training Loss: nan, Validation Loss: nanEpoch 223/500, Training Loss: nan, Validation Loss: nanEpoch 224/500, Training Loss: nan, Validation Loss: nanEpoch 225/500, Training Loss: nan, Validation Loss: nanEpoch 226/500, Training Loss: nan, Validation Loss: nanEpoch 227/500, Training Loss: nan, Validation Loss: nanEpoch 228/500, Training Loss: nan, Validation Loss: nanEpoch 229/500, Training Loss: nan, Validation Loss: nanEpoch 230/500, Training Loss: nan, Validation Loss: nanEpoch 231/500, Training Loss: nan, Validation Loss: nanEpoch 232/500, Training Loss: nan, Validation Loss: nanEpoch 233/500, Training Loss: nan, Validation Loss: nanEpoch 234/500, Training Loss: nan, Validation Loss: nanEpoch 235/500, Training Loss: nan, Validation Loss: nanEpoch 236/500, Training Loss: nan, Validation Loss: nanEpoch 237/500, Training Loss: nan, Validation Loss: nanEpoch 238/500, Training Loss: nan, Validation Loss: nanEpoch 239/500, Training Loss: nan, Validation Loss: nanEpoch 240/500, Training Loss: nan, Validation Loss: nanEpoch 241/500, Training Loss: nan, Validation Loss: nanEpoch 242/500, 
Training Loss: nan, Validation Loss: nanEpoch 243/500, Training Loss: nan, Validation Loss: nanEpoch 244/500, Training Loss: nan, Validation Loss: nanEpoch 245/500, Training Loss: nan, Validation Loss: nanEpoch 246/500, Training Loss: nan, Validation Loss: nanEpoch 247/500, Training Loss: nan, Validation Loss: nanEpoch 248/500, Training Loss: nan, Validation Loss: nanEpoch 249/500, Training Loss: nan, Validation Loss: nanEpoch 250/500, Training Loss: nan, Validation Loss: nanEpoch 251/500, Training Loss: nan, Validation Loss: nanEpoch 252/500, Training Loss: nan, Validation Loss: nanEpoch 253/500, Training Loss: nan, Validation Loss: nanEpoch 254/500, Training Loss: nan, Validation Loss: nanEpoch 255/500, Training Loss: nan, Validation Loss: nanEpoch 256/500, Training Loss: nan, Validation Loss: nanEpoch 257/500, Training Loss: nan, Validation Loss: nanEpoch 258/500, Training Loss: nan, Validation Loss: nanEpoch 259/500, Training Loss: nan, Validation Loss: nanEpoch 260/500, Training Loss: nan, Validation Loss: nanEpoch 261/500, Training Loss: nan, Validation Loss: nanEpoch 262/500, Training Loss: nan, Validation Loss: nanEpoch 263/500, Training Loss: nan, Validation Loss: nanEpoch 264/500, Training Loss: nan, Validation Loss: nanEpoch 265/500, Training Loss: nan, Validation Loss: nanEpoch 266/500, Training Loss: nan, Validation Loss: nanEpoch 267/500, Training Loss: nan, Validation Loss: nanEpoch 268/500, Training Loss: nan, Validation Loss: nanEpoch 269/500, Training Loss: nan, Validation Loss: nanEpoch 270/500, Training Loss: nan, Validation Loss: nanEpoch 271/500, Training Loss: nan, Validation Loss: nanEpoch 272/500, Training Loss: nan, Validation Loss: nanEpoch 273/500, Training Loss: nan, Validation Loss: nanEpoch 274/500, Training Loss: nan, Validation Loss: nanEpoch 275/500, Training Loss: nan, Validation Loss: nanEpoch 276/500, Training Loss: nan, Validation Loss: nanEpoch 277/500, Training Loss: nan, Validation Loss: nanEpoch 278/500, Training Loss: nan, Validation Loss: nanEpoch 279/500, Training Loss: nan, Validation Loss: nanEpoch 280/500, Training Loss: nan, Validation Loss: nanEpoch 281/500, Training Loss: nan, Validation Loss: nanEpoch 282/500, Training Loss: nan, Validation Loss: nanEpoch 283/500, Training Loss: nan, Validation Loss: nanEpoch 284/500, Training Loss: nan, Validation Loss: nanEpoch 285/500, Training Loss: nan, Validation Loss: nanEpoch 286/500, Training Loss: nan, Validation Loss: nanEpoch 287/500, Training Loss: nan, Validation Loss: nanEpoch 288/500, Training Loss: nan, Validation Loss: nanEpoch 289/500, Training Loss: nan, Validation Loss: nanEpoch 290/500, Training Loss: nan, Validation Loss: nanEpoch 291/500, Training Loss: nan, Validation Loss: nanEpoch 292/500, Training Loss: nan, Validation Loss: nanEpoch 293/500, Training Loss: nan, Validation Loss: nanEpoch 294/500, Training Loss: nan, Validation Loss: nanEpoch 295/500, Training Loss: nan, Validation Loss: nanEpoch 296/500, Training Loss: nan, Validation Loss: nanEpoch 297/500, Training Loss: nan, Validation Loss: nanEpoch 298/500, Training Loss: nan, Validation Loss: nanEpoch 299/500, Training Loss: nan, Validation Loss: nanEpoch 300/500, Training Loss: nan, Validation Loss: nanEpoch 301/500, Training Loss: nan, Validation Loss: nanEpoch 302/500, Training Loss: nan, Validation Loss: nanEpoch 303/500, Training Loss: nan, Validation Loss: nanEpoch 304/500, Training Loss: nan, Validation Loss: nanEpoch 305/500, Training Loss: nan, Validation Loss: nanEpoch 306/500, Training Loss: nan, Validation 
Loss: nanEpoch 307/500, Training Loss: nan, Validation Loss: nanEpoch 308/500, Training Loss: nan, Validation Loss: nanEpoch 309/500, Training Loss: nan, Validation Loss: nanEpoch 310/500, Training Loss: nan, Validation Loss: nanEpoch 311/500, Training Loss: nan, Validation Loss: nanEpoch 312/500, Training Loss: nan, Validation Loss: nanEpoch 313/500, Training Loss: nan, Validation Loss: nanEpoch 314/500, Training Loss: nan, Validation Loss: nanEpoch 315/500, Training Loss: nan, Validation Loss: nanEpoch 316/500, Training Loss: nan, Validation Loss: nanEpoch 317/500, Training Loss: nan, Validation Loss: nanEpoch 318/500, Training Loss: nan, Validation Loss: nanEpoch 319/500, Training Loss: nan, Validation Loss: nanEpoch 320/500, Training Loss: nan, Validation Loss: nanEpoch 321/500, Training Loss: nan, Validation Loss: nanEpoch 322/500, Training Loss: nan, Validation Loss: nanEpoch 323/500, Training Loss: nan, Validation Loss: nanEpoch 324/500, Training Loss: nan, Validation Loss: nanEpoch 325/500, Training Loss: nan, Validation Loss: nanEpoch 326/500, Training Loss: nan, Validation Loss: nanEpoch 327/500, Training Loss: nan, Validation Loss: nanEpoch 328/500, Training Loss: nan, Validation Loss: nanEpoch 329/500, Training Loss: nan, Validation Loss: nanEpoch 330/500, Training Loss: nan, Validation Loss: nanEpoch 331/500, Training Loss: nan, Validation Loss: nanEpoch 332/500, Training Loss: nan, Validation Loss: nanEpoch 333/500, Training Loss: nan, Validation Loss: nanEpoch 334/500, Training Loss: nan, Validation Loss: nanEpoch 335/500, Training Loss: nan, Validation Loss: nanEpoch 336/500, Training Loss: nan, Validation Loss: nanEpoch 337/500, Training Loss: nan, Validation Loss: nanEpoch 338/500, Training Loss: nan, Validation Loss: nanEpoch 339/500, Training Loss: nan, Validation Loss: nanEpoch 340/500, Training Loss: nan, Validation Loss: nanEpoch 341/500, Training Loss: nan, Validation Loss: nanEpoch 342/500, Training Loss: nan, Validation Loss: nanEpoch 343/500, Training Loss: nan, Validation Loss: nanEpoch 344/500, Training Loss: nan, Validation Loss: nanEpoch 345/500, Training Loss: nan, Validation Loss: nanEpoch 346/500, Training Loss: nan, Validation Loss: nanEpoch 347/500, Training Loss: nan, Validation Loss: nanEpoch 348/500, Training Loss: nan, Validation Loss: nanEpoch 349/500, Training Loss: nan, Validation Loss: nanEpoch 350/500, Training Loss: nan, Validation Loss: nanEpoch 351/500, Training Loss: nan, Validation Loss: nanEpoch 352/500, Training Loss: nan, Validation Loss: nanEpoch 353/500, Training Loss: nan, Validation Loss: nanEpoch 354/500, Training Loss: nan, Validation Loss: nanEpoch 355/500, Training Loss: nan, Validation Loss: nanEpoch 356/500, Training Loss: nan, Validation Loss: nanEpoch 357/500, Training Loss: nan, Validation Loss: nanEpoch 358/500, Training Loss: nan, Validation Loss: nanEpoch 359/500, Training Loss: nan, Validation Loss: nanEpoch 360/500, Training Loss: nan, Validation Loss: nanEpoch 361/500, Training Loss: nan, Validation Loss: nanEpoch 362/500, Training Loss: nan, Validation Loss: nanEpoch 363/500, Training Loss: nan, Validation Loss: nanEpoch 364/500, Training Loss: nan, Validation Loss: nanEpoch 365/500, Training Loss: nan, Validation Loss: nanEpoch 366/500, Training Loss: nan, Validation Loss: nanEpoch 367/500, Training Loss: nan, Validation Loss: nanEpoch 368/500, Training Loss: nan, Validation Loss: nanEpoch 369/500, Training Loss: nan, Validation Loss: nanEpoch 370/500, Training Loss: nan, Validation Loss: nanEpoch 371/500, Training 
Loss: nan, Validation Loss: nanEpoch 372/500, Training Loss: nan, Validation Loss: nanEpoch 373/500, Training Loss: nan, Validation Loss: nanEpoch 374/500, Training Loss: nan, Validation Loss: nanEpoch 375/500, Training Loss: nan, Validation Loss: nanEpoch 376/500, Training Loss: nan, Validation Loss: nanEpoch 377/500, Training Loss: nan, Validation Loss: nanEpoch 378/500, Training Loss: nan, Validation Loss: nanEpoch 379/500, Training Loss: nan, Validation Loss: nanEpoch 380/500, Training Loss: nan, Validation Loss: nanEpoch 381/500, Training Loss: nan, Validation Loss: nanEpoch 382/500, Training Loss: nan, Validation Loss: nanEpoch 383/500, Training Loss: nan, Validation Loss: nanEpoch 384/500, Training Loss: nan, Validation Loss: nanEpoch 385/500, Training Loss: nan, Validation Loss: nanEpoch 386/500, Training Loss: nan, Validation Loss: nanEpoch 387/500, Training Loss: nan, Validation Loss: nanEpoch 388/500, Training Loss: nan, Validation Loss: nanEpoch 389/500, Training Loss: nan, Validation Loss: nanEpoch 390/500, Training Loss: nan, Validation Loss: nanEpoch 391/500, Training Loss: nan, Validation Loss: nanEpoch 392/500, Training Loss: nan, Validation Loss: nanEpoch 393/500, Training Loss: nan, Validation Loss: nanEpoch 394/500, Training Loss: nan, Validation Loss: nanEpoch 395/500, Training Loss: nan, Validation Loss: nanEpoch 396/500, Training Loss: nan, Validation Loss: nanEpoch 397/500, Training Loss: nan, Validation Loss: nanEpoch 398/500, Training Loss: nan, Validation Loss: nanEpoch 399/500, Training Loss: nan, Validation Loss: nanEpoch 400/500, Training Loss: nan, Validation Loss: nanEpoch 401/500, Training Loss: nan, Validation Loss: nanEpoch 402/500, Training Loss: nan, Validation Loss: nanEpoch 403/500, Training Loss: nan, Validation Loss: nanEpoch 404/500, Training Loss: nan, Validation Loss: nanEpoch 405/500, Training Loss: nan, Validation Loss: nanEpoch 406/500, Training Loss: nan, Validation Loss: nanEpoch 407/500, Training Loss: nan, Validation Loss: nanEpoch 408/500, Training Loss: nan, Validation Loss: nanEpoch 409/500, Training Loss: nan, Validation Loss: nanEpoch 410/500, Training Loss: nan, Validation Loss: nanEpoch 411/500, Training Loss: nan, Validation Loss: nanEpoch 412/500, Training Loss: nan, Validation Loss: nanEpoch 413/500, Training Loss: nan, Validation Loss: nanEpoch 414/500, Training Loss: nan, Validation Loss: nanEpoch 415/500, Training Loss: nan, Validation Loss: nanEpoch 416/500, Training Loss: nan, Validation Loss: nanEpoch 417/500, Training Loss: nan, Validation Loss: nanEpoch 418/500, Training Loss: nan, Validation Loss: nanEpoch 419/500, Training Loss: nan, Validation Loss: nanEpoch 420/500, Training Loss: nan, Validation Loss: nanEpoch 421/500, Training Loss: nan, Validation Loss: nanEpoch 422/500, Training Loss: nan, Validation Loss: nanEpoch 423/500, Training Loss: nan, Validation Loss: nanEpoch 424/500, Training Loss: nan, Validation Loss: nanEpoch 425/500, Training Loss: nan, Validation Loss: nanEpoch 426/500, Training Loss: nan, Validation Loss: nanEpoch 427/500, Training Loss: nan, Validation Loss: nanEpoch 428/500, Training Loss: nan, Validation Loss: nanEpoch 429/500, Training Loss: nan, Validation Loss: nanEpoch 430/500, Training Loss: nan, Validation Loss: nanEpoch 431/500, Training Loss: nan, Validation Loss: nanEpoch 432/500, Training Loss: nan, Validation Loss: nanEpoch 433/500, Training Loss: nan, Validation Loss: nanEpoch 434/500, Training Loss: nan, Validation Loss: nanEpoch 435/500, Training Loss: nan, Validation Loss: 
nanEpoch 436/500, Training Loss: nan, Validation Loss: nanEpoch 437/500, Training Loss: nan, Validation Loss: nanEpoch 438/500, Training Loss: nan, Validation Loss: nanEpoch 439/500, Training Loss: nan, Validation Loss: nanEpoch 440/500, Training Loss: nan, Validation Loss: nanEpoch 441/500, Training Loss: nan, Validation Loss: nanEpoch 442/500, Training Loss: nan, Validation Loss: nanEpoch 443/500, Training Loss: nan, Validation Loss: nanEpoch 444/500, Training Loss: nan, Validation Loss: nanEpoch 445/500, Training Loss: nan, Validation Loss: nanEpoch 446/500, Training Loss: nan, Validation Loss: nanEpoch 447/500, Training Loss: nan, Validation Loss: nanEpoch 448/500, Training Loss: nan, Validation Loss: nanEpoch 449/500, Training Loss: nan, Validation Loss: nanEpoch 450/500, Training Loss: nan, Validation Loss: nanEpoch 451/500, Training Loss: nan, Validation Loss: nanEpoch 452/500, Training Loss: nan, Validation Loss: nanEpoch 453/500, Training Loss: nan, Validation Loss: nanEpoch 454/500, Training Loss: nan, Validation Loss: nanEpoch 455/500, Training Loss: nan, Validation Loss: nanEpoch 456/500, Training Loss: nan, Validation Loss: nanEpoch 457/500, Training Loss: nan, Validation Loss: nanEpoch 458/500, Training Loss: nan, Validation Loss: nanEpoch 459/500, Training Loss: nan, Validation Loss: nanEpoch 460/500, Training Loss: nan, Validation Loss: nanEpoch 461/500, Training Loss: nan, Validation Loss: nanEpoch 462/500, Training Loss: nan, Validation Loss: nanEpoch 463/500, Training Loss: nan, Validation Loss: nanEpoch 464/500, Training Loss: nan, Validation Loss: nanEpoch 465/500, Training Loss: nan, Validation Loss: nanEpoch 466/500, Training Loss: nan, Validation Loss: nanEpoch 467/500, Training Loss: nan, Validation Loss: nanEpoch 468/500, Training Loss: nan, Validation Loss: nanEpoch 469/500, Training Loss: nan, Validation Loss: nanEpoch 470/500, Training Loss: nan, Validation Loss: nanEpoch 471/500, Training Loss: nan, Validation Loss: nanEpoch 472/500, Training Loss: nan, Validation Loss: nanEpoch 473/500, Training Loss: nan, Validation Loss: nanEpoch 474/500, Training Loss: nan, Validation Loss: nanEpoch 475/500, Training Loss: nan, Validation Loss: nanEpoch 476/500, Training Loss: nan, Validation Loss: nanEpoch 477/500, Training Loss: nan, Validation Loss: nanEpoch 478/500, Training Loss: nan, Validation Loss: nanEpoch 479/500, Training Loss: nan, Validation Loss: nanEpoch 480/500, Training Loss: nan, Validation Loss: nanEpoch 481/500, Training Loss: nan, Validation Loss: nanEpoch 482/500, Training Loss: nan, Validation Loss: nanEpoch 483/500, Training Loss: nan, Validation Loss: nanEpoch 484/500, Training Loss: nan, Validation Loss: nanEpoch 485/500, Training Loss: nan, Validation Loss: nanEpoch 486/500, Training Loss: nan, Validation Loss: nanEpoch 487/500, Training Loss: nan, Validation Loss: nanEpoch 488/500, Training Loss: nan, Validation Loss: nanEpoch 489/500, Training Loss: nan, Validation Loss: nanEpoch 490/500, Training Loss: nan, Validation Loss: nanEpoch 491/500, Training Loss: nan, Validation Loss: nanEpoch 492/500, Training Loss: nan, Validation Loss: nanEpoch 493/500, Training Loss: nan, Validation Loss: nanEpoch 494/500, Training Loss: nan, Validation Loss: nanEpoch 495/500, Training Loss: nan, Validation Loss: nanEpoch 496/500, Training Loss: nan, Validation Loss: nanEpoch 497/500, Training Loss: nan, Validation Loss: nanEpoch 498/500, Training Loss: nan, Validation Loss: nanEpoch 499/500, Training Loss: nan, Validation Loss: nanEpoch 500/500, Training Loss: 
nan, Validation Loss: nanEpoch 1/800, Training Loss: 0.6957865499704967, Validation Loss: 0.6932727138580932Epoch 2/800, Training Loss: 0.6960109671498175, Validation Loss: 0.6933517078179263Epoch 3/800, Training Loss: 0.6960338265127157, Validation Loss: 0.6933560535114782Epoch 4/800, Training Loss: 0.6960350495879422, Validation Loss: 0.6933563618503238Epoch 5/800, Training Loss: 0.6960351260318529, Validation Loss: 0.6933564556318069Epoch 6/800, Training Loss: 0.6960351436110418, Validation Loss: 0.6933565331152117Epoch 7/800, Training Loss: 0.6960351583481589, Validation Loss: 0.693356604843315Epoch 8/800, Training Loss: 0.6960351729195269, Validation Loss: 0.6933566716363241Epoch 9/800, Training Loss: 0.6960351872687528, Validation Loss: 0.6933567337894255Epoch 10/800, Training Loss: 0.6960352012189597, Validation Loss: 0.6933567915499436Epoch 11/800, Training Loss: 0.6960352146055692, Validation Loss: 0.6933568451437816Epoch 12/800, Training Loss: 0.6960352272811472, Validation Loss: 0.6933568947783394Epoch 13/800, Training Loss: 0.6960352391137468, Validation Loss: 0.6933569406439852Epoch 14/800, Training Loss: 0.6960352499850218, Validation Loss: 0.6933569829152598Epoch 15/800, Training Loss: 0.6960352597884312, Validation Loss: 0.6933570217518883Epoch 16/800, Training Loss: 0.6960352684275394, Validation Loss: 0.6933570572996266Epoch 17/800, Training Loss: 0.6960352758144079, Validation Loss: 0.6933570896909617Epoch 18/800, Training Loss: 0.6960352818680676, Validation Loss: 0.6933571190456724Epoch 19/800, Training Loss: 0.6960352865130718, Validation Loss: 0.6933571454712698Epoch 20/800, Training Loss: 0.6960352896781068, Validation Loss: 0.693357169063321Epoch 21/800, Training Loss: 0.6960352912946653, Validation Loss: 0.6933571899056623Epoch 22/800, Training Loss: 0.6960352912957609, Validation Loss: 0.6933572080705103Epoch 23/800, Training Loss: 0.6960352896146743, Validation Loss: 0.6933572236184664Epoch 24/800, Training Loss: 0.6960352861837233, Validation Loss: 0.6933572365984129Epoch 25/800, Training Loss: 0.6960352809330439, Validation Loss: 0.6933572470473054Epoch 26/800, Training Loss: 0.6960352737893591, Validation Loss: 0.6933572549898427Epoch 27/800, Training Loss: 0.6960352646747306, Validation Loss: 0.6933572604380179Epoch 28/800, Training Loss: 0.6960352535052716, Validation Loss: 0.6933572633905285Epoch 29/800, Training Loss: 0.696035240189799, Validation Loss: 0.6933572638320346Epoch 30/800, Training Loss: 0.6960352246284137, Validation Loss: 0.6933572617322452Epoch 31/800, Training Loss: 0.6960352067109723, Validation Loss: 0.693357257044809Epoch 32/800, Training Loss: 0.696035186315429, Validation Loss: 0.6933572497059773Epoch 33/800, Training Loss: 0.6960351633060224, Validation Loss: 0.6933572396330032Epoch 34/800, Training Loss: 0.6960351375312591, Validation Loss: 0.6933572267222342Epoch 35/800, Training Loss: 0.6960351088216553, Validation Loss: 0.6933572108468431Epoch 36/800, Training Loss: 0.6960350769871905, Validation Loss: 0.6933571918541285Epoch 37/800, Training Loss: 0.6960350418144114, Validation Loss: 0.693357169562306Epoch 38/800, Training Loss: 0.6960350030630966, Validation Loss: 0.6933571437566828Epoch 39/800, Training Loss: 0.6960349604624299, Validation Loss: 0.6933571141851063Epoch 40/800, Training Loss: 0.6960349137065373, Validation Loss: 0.693357080552508Epoch 41/800, Training Loss: 0.6960348624492809, Validation Loss: 0.6933570425143765Epoch 42/800, Training Loss: 0.6960348062981432, Validation Loss: 0.6933569996688976Epoch 43/800, 
Training Loss: 0.6960347448069973, Validation Loss: 0.6933569515474698Epoch 44/800, Training Loss: 0.6960346774675229, Validation Loss: 0.6933568976032054Epoch 45/800, Training Loss: 0.6960346036989481, Validation Loss: 0.693356837196925Epoch 46/800, Training Loss: 0.6960345228357268, Validation Loss: 0.6933567695800215Epoch 47/800, Training Loss: 0.696034434112643, Validation Loss: 0.6933566938733751Epoch 48/800, Training Loss: 0.6960343366467032, Validation Loss: 0.693356609041261Epoch 49/800, Training Loss: 0.6960342294149793, Validation Loss: 0.6933565138588695Epoch 50/800, Training Loss: 0.6960341112272961, Validation Loss: 0.6933564068715918Epoch 51/800, Training Loss: 0.6960339806923658, Validation Loss: 0.6933562863436411Epoch 52/800, Training Loss: 0.6960338361754226, Validation Loss: 0.6933561501927006Epoch 53/800, Training Loss: 0.6960336757448363, Validation Loss: 0.6933559959061377Epoch 54/800, Training Loss: 0.696033497104228, Validation Loss: 0.6933558204326332Epoch 55/800, Training Loss: 0.696033297505356, Validation Loss: 0.6933556200406856Epoch 56/800, Training Loss: 0.6960330736351861, Validation Loss: 0.693355390131986Epoch 57/800, Training Loss: 0.6960328214679005, Validation Loss: 0.6933551249925658Epoch 58/800, Training Loss: 0.6960325360686833, Validation Loss: 0.6933548174569985Epoch 59/800, Training Loss: 0.696032211330207, Validation Loss: 0.6933544584494069Epoch 60/800, Training Loss: 0.6960318396137822, Validation Loss: 0.6933540363472177Epoch 61/800, Training Loss: 0.6960314112531782, Validation Loss: 0.693353536085538Epoch 62/800, Training Loss: 0.6960309138569803, Validation Loss: 0.693352937874981Epoch 63/800, Training Loss: 0.6960303313093988, Validation Loss: 0.6933522153316007Epoch 64/800, Training Loss: 0.6960296423096239, Validation Loss: 0.6933513326924934Epoch 65/800, Training Loss: 0.6960288181873902, Validation Loss: 0.6933502405734593Epoch 66/800, Training Loss: 0.6960278195516463, Validation Loss: 0.69334886933617Epoch 67/800, Training Loss: 0.6960265909988939, Validation Loss: 0.6933471184105289Epoch 68/800, Training Loss: 0.6960250524802405, Validation Loss: 0.6933448385240953Epoch 69/800, Training Loss: 0.6960230846802513, Validation Loss: 0.693341800973126Epoch 70/800, Training Loss: 0.696020503158279, Validation Loss: 0.6933376420682316Epoch 71/800, Training Loss: 0.6960170102382707, Validation Loss: 0.6933317572954436Epoch 72/800, Training Loss: 0.6960120999510591, Validation Loss: 0.6933230866378198Epoch 73/800, Training Loss: 0.696004856066247, Validation Loss: 0.693309644577643Epoch 74/800, Training Loss: 0.695993482720361, Validation Loss: 0.6932873882597714Epoch 75/800, Training Loss: 0.6959740820397224, Validation Loss: 0.6932471363310818Epoch 76/800, Training Loss: 0.6959369542679632, Validation Loss: 0.6931646816128578Epoch 77/800, Training Loss: 0.6958527821645795, Validation Loss: 0.6929606760814382Epoch 78/800, Training Loss: 0.6956015140751763, Validation Loss: 0.6922652544542167Epoch 79/800, Training Loss: 0.6943173029306617, Validation Loss: 0.6876420981706207Epoch 80/800, Training Loss: 0.6654328565186345, Validation Loss: 0.5175891368152332Epoch 81/800, Training Loss: 0.41413055852039415, Validation Loss: 0.28268649983823Epoch 82/800, Training Loss: 0.3979719518136703, Validation Loss: 0.289856982071355Epoch 83/800, Training Loss: 0.4318948679967729, Validation Loss: 0.29792138528772655Epoch 84/800, Training Loss: 0.4569143053890244, Validation Loss: 0.30365124767492596Epoch 85/800, Training Loss: 0.4861444166783285, 
Validation Loss: 0.30347357502482347Epoch 86/800, Training Loss: 0.5268812437044611, Validation Loss: 0.3221024482937026Epoch 87/800, Training Loss: 0.7776679463427751, Validation Loss: 0.4245126410627697Epoch 88/800, Training Loss: 1.1548874134591691, Validation Loss: 0.5284510592249Epoch 89/800, Training Loss: nan, Validation Loss: nanEpoch 90/800, Training Loss: nan, Validation Loss: nanEpoch 91/800, Training Loss: nan, Validation Loss: nanEpoch 92/800, Training Loss: nan, Validation Loss: nanEpoch 93/800, Training Loss: nan, Validation Loss: nanEpoch 94/800, Training Loss: nan, Validation Loss: nanEpoch 95/800, Training Loss: nan, Validation Loss: nanEpoch 96/800, Training Loss: nan, Validation Loss: nanEpoch 97/800, Training Loss: nan, Validation Loss: nanEpoch 98/800, Training Loss: nan, Validation Loss: nanEpoch 99/800, Training Loss: nan, Validation Loss: nanEpoch 100/800, Training Loss: nan, Validation Loss: nanEpoch 101/800, Training Loss: nan, Validation Loss: nanEpoch 102/800, Training Loss: nan, Validation Loss: nanEpoch 103/800, Training Loss: nan, Validation Loss: nanEpoch 104/800, Training Loss: nan, Validation Loss: nanEpoch 105/800, Training Loss: nan, Validation Loss: nanEpoch 106/800, Training Loss: nan, Validation Loss: nanEpoch 107/800, Training Loss: nan, Validation Loss: nanEpoch 108/800, Training Loss: nan, Validation Loss: nanEpoch 109/800, Training Loss: nan, Validation Loss: nanEpoch 110/800, Training Loss: nan, Validation Loss: nanEpoch 111/800, Training Loss: nan, Validation Loss: nanEpoch 112/800, Training Loss: nan, Validation Loss: nanEpoch 113/800, Training Loss: nan, Validation Loss: nanEpoch 114/800, Training Loss: nan, Validation Loss: nanEpoch 115/800, Training Loss: nan, Validation Loss: nanEpoch 116/800, Training Loss: nan, Validation Loss: nanEpoch 117/800, Training Loss: nan, Validation Loss: nanEpoch 118/800, Training Loss: nan, Validation Loss: nanEpoch 119/800, Training Loss: nan, Validation Loss: nanEpoch 120/800, Training Loss: nan, Validation Loss: nanEpoch 121/800, Training Loss: nan, Validation Loss: nanEpoch 122/800, Training Loss: nan, Validation Loss: nanEpoch 123/800, Training Loss: nan, Validation Loss: nanEpoch 124/800, Training Loss: nan, Validation Loss: nanEpoch 125/800, Training Loss: nan, Validation Loss: nanEpoch 126/800, Training Loss: nan, Validation Loss: nanEpoch 127/800, Training Loss: nan, Validation Loss: nanEpoch 128/800, Training Loss: nan, Validation Loss: nanEpoch 129/800, Training Loss: nan, Validation Loss: nanEpoch 130/800, Training Loss: nan, Validation Loss: nanEpoch 131/800, Training Loss: nan, Validation Loss: nanEpoch 132/800, Training Loss: nan, Validation Loss: nanEpoch 133/800, Training Loss: nan, Validation Loss: nanEpoch 134/800, Training Loss: nan, Validation Loss: nanEpoch 135/800, Training Loss: nan, Validation Loss: nanEpoch 136/800, Training Loss: nan, Validation Loss: nanEpoch 137/800, Training Loss: nan, Validation Loss: nanEpoch 138/800, Training Loss: nan, Validation Loss: nanEpoch 139/800, Training Loss: nan, Validation Loss: nanEpoch 140/800, Training Loss: nan, Validation Loss: nanEpoch 141/800, Training Loss: nan, Validation Loss: nanEpoch 142/800, Training Loss: nan, Validation Loss: nanEpoch 143/800, Training Loss: nan, Validation Loss: nanEpoch 144/800, Training Loss: nan, Validation Loss: nanEpoch 145/800, Training Loss: nan, Validation Loss: nanEpoch 146/800, Training Loss: nan, Validation Loss: nanEpoch 147/800, Training Loss: nan, Validation Loss: nanEpoch 148/800, Training Loss: nan, 
Validation Loss: nanEpoch 149/800, Training Loss: nan, Validation Loss: nanEpoch 150/800, Training Loss: nan, Validation Loss: nanEpoch 151/800, Training Loss: nan, Validation Loss: nanEpoch 152/800, Training Loss: nan, Validation Loss: nanEpoch 153/800, Training Loss: nan, Validation Loss: nanEpoch 154/800, Training Loss: nan, Validation Loss: nanEpoch 155/800, Training Loss: nan, Validation Loss: nanEpoch 156/800, Training Loss: nan, Validation Loss: nanEpoch 157/800, Training Loss: nan, Validation Loss: nanEpoch 158/800, Training Loss: nan, Validation Loss: nanEpoch 159/800, Training Loss: nan, Validation Loss: nanEpoch 160/800, Training Loss: nan, Validation Loss: nanEpoch 161/800, Training Loss: nan, Validation Loss: nanEpoch 162/800, Training Loss: nan, Validation Loss: nanEpoch 163/800, Training Loss: nan, Validation Loss: nanEpoch 164/800, Training Loss: nan, Validation Loss: nanEpoch 165/800, Training Loss: nan, Validation Loss: nanEpoch 166/800, Training Loss: nan, Validation Loss: nanEpoch 167/800, Training Loss: nan, Validation Loss: nanEpoch 168/800, Training Loss: nan, Validation Loss: nanEpoch 169/800, Training Loss: nan, Validation Loss: nanEpoch 170/800, Training Loss: nan, Validation Loss: nanEpoch 171/800, Training Loss: nan, Validation Loss: nanEpoch 172/800, Training Loss: nan, Validation Loss: nanEpoch 173/800, Training Loss: nan, Validation Loss: nanEpoch 174/800, Training Loss: nan, Validation Loss: nanEpoch 175/800, Training Loss: nan, Validation Loss: nanEpoch 176/800, Training Loss: nan, Validation Loss: nanEpoch 177/800, Training Loss: nan, Validation Loss: nanEpoch 178/800, Training Loss: nan, Validation Loss: nanEpoch 179/800, Training Loss: nan, Validation Loss: nanEpoch 180/800, Training Loss: nan, Validation Loss: nanEpoch 181/800, Training Loss: nan, Validation Loss: nanEpoch 182/800, Training Loss: nan, Validation Loss: nanEpoch 183/800, Training Loss: nan, Validation Loss: nanEpoch 184/800, Training Loss: nan, Validation Loss: nanEpoch 185/800, Training Loss: nan, Validation Loss: nanEpoch 186/800, Training Loss: nan, Validation Loss: nanEpoch 187/800, Training Loss: nan, Validation Loss: nanEpoch 188/800, Training Loss: nan, Validation Loss: nanEpoch 189/800, Training Loss: nan, Validation Loss: nanEpoch 190/800, Training Loss: nan, Validation Loss: nanEpoch 191/800, Training Loss: nan, Validation Loss: nanEpoch 192/800, Training Loss: nan, Validation Loss: nanEpoch 193/800, Training Loss: nan, Validation Loss: nanEpoch 194/800, Training Loss: nan, Validation Loss: nanEpoch 195/800, Training Loss: nan, Validation Loss: nanEpoch 196/800, Training Loss: nan, Validation Loss: nanEpoch 197/800, Training Loss: nan, Validation Loss: nanEpoch 198/800, Training Loss: nan, Validation Loss: nanEpoch 199/800, Training Loss: nan, Validation Loss: nanEpoch 200/800, Training Loss: nan, Validation Loss: nanEpoch 201/800, Training Loss: nan, Validation Loss: nanEpoch 202/800, Training Loss: nan, Validation Loss: nanEpoch 203/800, Training Loss: nan, Validation Loss: nanEpoch 204/800, Training Loss: nan, Validation Loss: nanEpoch 205/800, Training Loss: nan, Validation Loss: nanEpoch 206/800, Training Loss: nan, Validation Loss: nanEpoch 207/800, Training Loss: nan, Validation Loss: nanEpoch 208/800, Training Loss: nan, Validation Loss: nanEpoch 209/800, Training Loss: nan, Validation Loss: nanEpoch 210/800, Training Loss: nan, Validation Loss: nanEpoch 211/800, Training Loss: nan, Validation Loss: nanEpoch 212/800, Training Loss: nan, Validation Loss: nanEpoch 213/800, 
Training Loss: nan, Validation Loss: nanEpoch 214/800, Training Loss: nan, Validation Loss: nanEpoch 215/800, Training Loss: nan, Validation Loss: nanEpoch 216/800, Training Loss: nan, Validation Loss: nanEpoch 217/800, Training Loss: nan, Validation Loss: nanEpoch 218/800, Training Loss: nan, Validation Loss: nanEpoch 219/800, Training Loss: nan, Validation Loss: nanEpoch 220/800, Training Loss: nan, Validation Loss: nanEpoch 221/800, Training Loss: nan, Validation Loss: nanEpoch 222/800, Training Loss: nan, Validation Loss: nanEpoch 223/800, Training Loss: nan, Validation Loss: nanEpoch 224/800, Training Loss: nan, Validation Loss: nanEpoch 225/800, Training Loss: nan, Validation Loss: nanEpoch 226/800, Training Loss: nan, Validation Loss: nanEpoch 227/800, Training Loss: nan, Validation Loss: nanEpoch 228/800, Training Loss: nan, Validation Loss: nanEpoch 229/800, Training Loss: nan, Validation Loss: nanEpoch 230/800, Training Loss: nan, Validation Loss: nanEpoch 231/800, Training Loss: nan, Validation Loss: nanEpoch 232/800, Training Loss: nan, Validation Loss: nanEpoch 233/800, Training Loss: nan, Validation Loss: nanEpoch 234/800, Training Loss: nan, Validation Loss: nanEpoch 235/800, Training Loss: nan, Validation Loss: nanEpoch 236/800, Training Loss: nan, Validation Loss: nanEpoch 237/800, Training Loss: nan, Validation Loss: nanEpoch 238/800, Training Loss: nan, Validation Loss: nanEpoch 239/800, Training Loss: nan, Validation Loss: nanEpoch 240/800, Training Loss: nan, Validation Loss: nanEpoch 241/800, Training Loss: nan, Validation Loss: nanEpoch 242/800, Training Loss: nan, Validation Loss: nanEpoch 243/800, Training Loss: nan, Validation Loss: nanEpoch 244/800, Training Loss: nan, Validation Loss: nanEpoch 245/800, Training Loss: nan, Validation Loss: nanEpoch 246/800, Training Loss: nan, Validation Loss: nanEpoch 247/800, Training Loss: nan, Validation Loss: nanEpoch 248/800, Training Loss: nan, Validation Loss: nanEpoch 249/800, Training Loss: nan, Validation Loss: nanEpoch 250/800, Training Loss: nan, Validation Loss: nanEpoch 251/800, Training Loss: nan, Validation Loss: nanEpoch 252/800, Training Loss: nan, Validation Loss: nanEpoch 253/800, Training Loss: nan, Validation Loss: nanEpoch 254/800, Training Loss: nan, Validation Loss: nanEpoch 255/800, Training Loss: nan, Validation Loss: nanEpoch 256/800, Training Loss: nan, Validation Loss: nanEpoch 257/800, Training Loss: nan, Validation Loss: nanEpoch 258/800, Training Loss: nan, Validation Loss: nanEpoch 259/800, Training Loss: nan, Validation Loss: nanEpoch 260/800, Training Loss: nan, Validation Loss: nanEpoch 261/800, Training Loss: nan, Validation Loss: nanEpoch 262/800, Training Loss: nan, Validation Loss: nanEpoch 263/800, Training Loss: nan, Validation Loss: nanEpoch 264/800, Training Loss: nan, Validation Loss: nanEpoch 265/800, Training Loss: nan, Validation Loss: nanEpoch 266/800, Training Loss: nan, Validation Loss: nanEpoch 267/800, Training Loss: nan, Validation Loss: nanEpoch 268/800, Training Loss: nan, Validation Loss: nanEpoch 269/800, Training Loss: nan, Validation Loss: nanEpoch 270/800, Training Loss: nan, Validation Loss: nanEpoch 271/800, Training Loss: nan, Validation Loss: nanEpoch 272/800, Training Loss: nan, Validation Loss: nanEpoch 273/800, Training Loss: nan, Validation Loss: nanEpoch 274/800, Training Loss: nan, Validation Loss: nanEpoch 275/800, Training Loss: nan, Validation Loss: nanEpoch 276/800, Training Loss: nan, Validation Loss: nanEpoch 277/800, Training Loss: nan, Validation 
Loss: nan
[output condensed: Epoch 278/800 through Epoch 800/800 all reported Training Loss: nan, Validation Loss: nan; once the loss became nan, this run never recovered]
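The nan run above is a classic divergence pattern: when the learning rate is too large, the weight updates overshoot until an overflow turns the loss into nan, and nan then propagates through every subsequent gradient, so no later epoch can recover. There is no value in printing hundreds more nan epochs, so the training loop can abort the run as soon as the loss stops being finite. Below is a minimal sketch of such a guard, not the assignment's actual training function; train_one_epoch and compute_loss are hypothetical stand-ins for whatever update and loss routines the network class provides.

import numpy as np

def train_with_nan_guard(model, X_train, y_train, X_val, y_val, n_epochs):
    """Sketch of a training loop that stops as soon as the loss diverges.

    model.train_one_epoch and model.compute_loss are hypothetical
    placeholders, not methods defined in this assignment.
    """
    for epoch in range(1, n_epochs + 1):
        model.train_one_epoch(X_train, y_train)
        train_loss = model.compute_loss(X_train, y_train)
        val_loss = model.compute_loss(X_val, y_val)
        # np.isfinite is False for nan and +/-inf, catching the
        # overflow-then-nan pattern seen in the diverged run above
        if not (np.isfinite(train_loss) and np.isfinite(val_loss)):
            print(f"Epoch {epoch}/{n_epochs}: non-finite loss, stopping early "
                  "(try a smaller learning rate)")
            return
        print(f"Epoch {epoch}/{n_epochs}, Training Loss: {train_loss}, "
              f"Validation Loss: {val_loss}")

During a random search over hyperparameters, a guard like this also saves compute: a configuration that has diverged can be scored as failed immediately rather than finishing its full epoch budget.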
Epoch 1/500, Training Loss: 0.6937591521222543, Validation Loss: 0.6932226115644045
...
Epoch 500/500, Training Loss: 0.6927449727323947, Validation Loss: 0.692918667163987
[output condensed: every one of the 500 epochs reported training and validation losses within about 0.001 of 0.693; the network barely moved from its initialization]
print(
f"Best Hyperparameters:\n"
f"Learning Rate\tEpoch\tValidation Loss\n"
f"{best_lr}\t\t{best_epoch}\t{best_loss:.4f}"
)
Best Hyperparameters:
Learning Rate	Epoch	Validation Loss
0.01		500	0.2272
# Define a custom two-color colormap
color0 = "#121619" # Dark grey
color1 = "#00B050" # Green
color_map = ListedColormap([color0, color1])
# Plot the decision boundary of a trained network over a 2D feature grid
def new_decision_boundary(nn, X, y, subplot, cmap=color_map, title="Decision Boundary"):
    x_min, x_max = X[:, 0].min() - 0.1, X[:, 0].max() + 0.1
    y_min, y_max = X[:, 1].min() - 0.1, X[:, 1].max() + 0.1
    h = 0.01
    # Generate a grid of points with distance h between them
    xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
    # Predict the class for every point on the grid
    Z = nn.predict(np.c_[xx.ravel(), yy.ravel()])
    Z = Z.reshape(xx.shape)
    # Plot the contour and the training examples
    plt.subplot(*subplot)
    plt.contourf(xx, yy, Z, alpha=0.5, cmap=cmap)
    plt.scatter(X[:, 0], X[:, 1], c=y, cmap=cmap, edgecolor="k")
    plt.xlim(xx.min(), xx.max())
    plt.ylim(yy.min(), yy.max())
    plt.xlabel("Feature 1")
    plt.ylabel("Feature 2")
    plt.title(title)
plt.figure(figsize=(12, 5))
# Plot decision boundary on training data
new_decision_boundary(
nn_best,
X_train,
y_train,
subplot=(1, 2, 1),
title="Training Data Decision Boundary",
)
# Plot decision boundary on validation data
new_decision_boundary(
nn_best,
X_val,
y_val,
subplot=(1, 2, 2),
title="Validation Data Decision Boundary",
)
plt.show()
# Custom neural network's predicted scores on the test set
y_scores_nn = nn_best.predict_proba(X_test)
# Compute the ROC curve and AUC for the custom neural network
fpr_nn, tpr_nn, _ = roc_curve(y_test, y_scores_nn)
roc_auc_nn = auc(fpr_nn, tpr_nn)
# Train scikit-learn MLPClassifier
mlp = MLPClassifier(
hidden_layer_sizes=(10, 5), max_iter=2000, learning_rate_init=0.1, random_state=42
)
mlp.fit(X_train, y_train)
# MLPClassifier predictions
y_scores_mlp = mlp.predict_proba(X_test)[:, 1]
# ROC curve and ROC area for MLPClassifier
fpr_mlp, tpr_mlp, _ = roc_curve(y_test, y_scores_mlp)
roc_auc_mlp = auc(fpr_mlp, tpr_mlp)
# plot ROC curve
plt.figure()
plt.plot(
fpr_nn,
tpr_nn,
color="red",
lw=2,
label=f"NN ROC curve (area = {roc_auc_nn:.2f})",
)
plt.plot(
fpr_mlp,
tpr_mlp,
color="blue",
lw=2,
label=f"MLP ROC curve (area = {roc_auc_mlp:.2f})",
)
plt.plot([0, 1], [0, 1], color="salmon", lw=2, linestyle="--", label="Random Guess")
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.title("Receiver Operating Characteristic")
plt.legend(loc="lower right")
plt.show()
(c) Suggest two ways in which your neural network implementation could be improved: are there any options we discussed in class that were not included in your implementation that could improve performance?
(c)¶
From the evaluation in parts (a) and (b), my custom-trained neural network achieves performance comparable to the scikit-learn MLPClassifier. However, the training process and model architecture exposed some issues. Training is constrained to a relatively low number of epochs: extending beyond 1000 epochs leads to dramatic, incorrect shifts in the loss curves. The decision boundary is also not well-defined, even though the training and validation data look normal, and the MLP ROC curve reaching an AUC of 1 is a strong indicator of overfitting. Several options discussed in class could address these issues:
- L1 (lasso) regularization. Adding an L1 penalty on the weights to the loss function discourages large weights and reduces overfitting. Applying it requires choosing a suitable value for λ and adding the penalty term (and its gradient) to the existing loss; the regularized objective then both fits the data and keeps the weights small, improving the model's generalization.
- Dropout. Randomly disabling neurons during training forces the network to learn more robust features that do not rely on any small set of neurons, which typically leads to a model that generalizes better to new, unseen data.
- Early stopping. Monitoring the validation loss and halting training once it stops improving would curtail training before the model overfits.
These strategies should correct the observed problems and improve the generalizability of the model; a sketch of all three is given below.
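For concreteness, a minimal statement of the L1-penalized objective, assuming $J(\mathbf{W})$ denotes the network's current loss (e.g., the cross-entropy used above) and $W^{(l)}$ the weight matrix of layer $l$:

$$\tilde{J}(\mathbf{W}) \;=\; J(\mathbf{W}) \;+\; \lambda \sum_{l}\sum_{i,j}\bigl|W^{(l)}_{ij}\bigr|$$

During backpropagation each weight's gradient picks up an extra $\lambda\,\operatorname{sign}(W^{(l)}_{ij})$ term, so the update rule itself barely changes.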
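The following is a minimal, self-contained NumPy sketch of how the three changes could be wired in. The names (W1, A1, lam, keep_prob, patience) and the toy validation-loss history are hypothetical stand-ins, not part of the implementation above.

import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-ins so the sketch runs on its own; in practice these
# would come from the custom network's forward/backward pass.
W1 = rng.normal(size=(2, 10))   # hidden-layer weights
A1 = rng.normal(size=(32, 10))  # hidden activations for a batch of 32
data_loss = 0.69                # unregularized cross-entropy loss

# 1) L1 (lasso) regularization: add lambda * sum(|w|) to the loss and
#    lambda * sign(w) to the corresponding weight gradients.
lam = 1e-3
reg_loss = data_loss + lam * np.abs(W1).sum()
dW1_l1 = lam * np.sign(W1)  # add this to dW1 during backpropagation

# 2) Inverted dropout: randomly zero hidden units during training and
#    rescale by keep_prob so no change is needed at test time.
keep_prob = 0.8
mask = (rng.random(A1.shape) < keep_prob) / keep_prob
A1_dropped = A1 * mask  # feed this into the next layer while training

# 3) Early stopping: halt once the validation loss stops improving.
patience, best_val, stall = 3, np.inf, 0
toy_val_history = [0.70, 0.69, 0.689, 0.689, 0.689, 0.689]  # hypothetical
for epoch, val_loss in enumerate(toy_val_history):
    if val_loss < best_val - 1e-6:
        best_val, stall = val_loss, 0
    else:
        stall += 1
    if stall >= patience:
        print(f"Early stopping at epoch {epoch}")
        break
print(f"L1-regularized loss: {reg_loss:.4f}")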